Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44 Issue 1, 13 November 2018
  
Survey of Cycle Packing Problem
LUO Wei-dong, WANG Jian-xin and FENG Qi-long
Computer Science. 2017, 44 (1): 1-6.  doi:10.11896/j.issn.1002-137X.2017.01.001
The cycle packing problem was first presented by Erdős and Pósa. Since then, researchers have been studying the problem in graph theory and theoretical computer science. Recently, researchers found that the problem has important applications in computational biology, especially for the reconstruction of evolutionary trees and genomic analysis. In this paper, an introduction to the research status of this problem was given. First and foremost, some results on the cycle packing problem in graph theory were discussed. Then, we analyzed and discussed approximation algorithms, parameterized algorithms, parameterized complexity and inapproximability of the problem. At last, some further research directions on this problem were given.
SHMA: Monitoring Architecture for Clouds
CHEN Lin, YING Shi and JIA Xiang-yang
Computer Science. 2017, 44 (1): 7-12.  doi:10.11896/j.issn.1002-137X.2017.01.002
Monitoring plays a significant role in ensuring the availability of cloud computing, because of the complex architecture of clouds and the unpredictability of workloads. It provides real-time monitoring data and the status of the running system, and helps to discover defects and optimize system performance. However, due to the heterogeneity of clouds, the resources on a cloud are often dynamic, diverse and of enormous scale, which brings a great challenge for cloud monitoring. To solve this problem, we proposed a scalable and hierarchical monitoring architecture for clouds (SHMA). SHMA uses microservices to build each independent component of the scalable monitoring system. It offers cloud providers and cloud consumers an overview of resources on three levels of the cloud, including the runtime information of application services, middleware and infrastructure. Finally, we used a case of a comprehensive disaster reduction system on CloudStack to evaluate the effectiveness of the architecture.
I/O Scheduling Algorithm Based on Dynamic Prioritization in Virtual Machines
GUO Song-hui, GONG Xue-rong, WANG Wei, LI Qing-bao and SUN Lei
Computer Science. 2017, 44 (1): 13-19.  doi:10.11896/j.issn.1002-137X.2017.01.003
I/O scheduling is a key factor in the performance of I/O-intensive virtual machines. Existing scheduling methods mainly focus on optimizing the global I/O bandwidth of virtual machines, and few take the performance of each virtual domain and the entire system into account simultaneously. To address the deficiencies of existing methods, an I/O scheduling algorithm named DPS based on dynamic prioritization was proposed. The algorithm takes advantage of multi-attribute decision making theory to calculate attribute weights dynamically, assesses I/O task priority comprehensively, and effectively characterizes each virtual domain by its priority value. The performance of DPS was evaluated by scheduling virtualized NICs in Xen. The results show that DPS improves the deadline guarantee ratios of a specified domain and the entire system, improves global I/O bandwidth, and can provide differentiated services for each virtual domain on demand.
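As a minimal sketch of the dynamic-prioritization idea, the snippet below scores I/O tasks with an entropy-weight variant of multi-attribute decision making; the attribute names, their scaling, and the entropy weighting are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def entropy_weights(X):
        # Entropy-weight method: attributes whose values are more dispersed
        # across tasks receive larger weights. X: (n_tasks, n_attrs), >= 0.
        P = X / X.sum(axis=0)                     # column-wise proportions
        P = np.where(P == 0, 1e-12, P)            # avoid log(0)
        e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # per-attribute entropy
        d = 1.0 - e                               # degree of diversification
        return d / d.sum()

    # Hypothetical per-task attributes, scaled so larger = more urgent:
    # [deadline closeness, waiting time, virtual-domain weight]
    tasks = np.array([[0.9, 0.2, 0.5],
                      [0.4, 0.8, 0.5],
                      [0.7, 0.6, 0.9]])
    w = entropy_weights(tasks)                    # recomputed as load changes
    priority = tasks @ w                          # weighted-sum task priority
    print(np.argsort(-priority))                  # dispatch highest first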
Evaluating Intel AVX2 Vgather Instructions with Stencils
LIN Xin-hua, QIN Qiang, LI Shuo, WEN Min-hua and MATSUOKA Satoshi
Computer Science. 2017, 44 (1): 20-24.  doi:10.11896/j.issn.1002-137X.2017.01.004
Intel introduced the AVX2 vgather instructions on the Haswell CPU to better support reading discontiguous data in vectorization. We found that the compiler generates vgather instructions, which slow down the performance of stencils on Haswell, because branches exist in defining the boundary conditions of stencils. We proposed utilizing peel optimization or intrinsic loads to avoid these vgather instructions. We applied these optimizations to three stencil benchmarks, a long-range stencil 3DFD, and a hybrid stencil application, and achieved speedups from 1.22X to 3.88X on Haswell. By analyzing the implementation of the instruction, we found that vgather instructions are decoded into multiple micro-operations (μops), generating one μop for each element to be gathered. Due to the high overhead of the decoder, the vgather instructions become the performance bottleneck of stencils on Haswell. It is believed that understanding the implementation of the AVX2 vgather instructions and adopting the optimizations to avoid them are quite helpful for performance tuning of applications with good spatial locality on Haswell.
Rough Set Attribute Reduction Algorithm for Partially Labeled Data
ZHANG Wei, MIAO Duo-qian, GAO Can and LI Feng
Computer Science. 2017, 44 (1): 25-31.  doi:10.11896/j.issn.1002-137X.2017.01.005
Attribute reduction, as an important preprocessing step for knowledge acquisition in data mining, is one of the key issues in rough set theory. Rough set theory is an effective supervised learning model for labeled data. However, attribute reduction for partially labeled data is outside the realm of traditional rough set theory. In this paper, a rough set attribute reduction algorithm for partially labeled data was proposed based on co-training, which capitalizes on the unlabeled data to improve the quality of attribute reducts derived from few labeled data. It computes two diverse reducts of the labeled data, employs them to train its base classifiers, and then co-trains the two base classifiers iteratively. In every round, the base classifiers learn from each other on the unlabeled data and enlarge the labeled data, so reducts of better quality can be computed from the enlarged labeled data and employed to construct base classifiers of higher performance. Theoretical analysis and experimental results on UCI data sets show that the proposed algorithm can select fewer attributes while keeping classification power.
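A compact sketch of the co-training loop under stated assumptions: two column subsets stand in for the two attribute reducts, decision trees stand in for the base classifiers, and the reduct computation itself is omitted.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def co_train(X, y, X_unlab, view_a, view_b, rounds=5, per_round=10):
        # view_a / view_b are column-index lists standing in for two reducts.
        for _ in range(rounds):
            for view in (view_a, view_b):
                if len(X_unlab) == 0:
                    return X, y
                clf = DecisionTreeClassifier().fit(X[:, view], y)
                proba = clf.predict_proba(X_unlab[:, view])
                pick = np.argsort(-proba.max(axis=1))[:per_round]  # most confident
                labels = clf.classes_[proba.argmax(axis=1)][pick]
                # One view's confident predictions enlarge the labeled set
                # the other view trains on in the next pass.
                X = np.vstack([X, X_unlab[pick]])
                y = np.concatenate([y, labels])
                X_unlab = np.delete(X_unlab, pick, axis=0)
        return X, y

    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(30, 6)); y0 = (X0[:, 0] + X0[:, 3] > 0).astype(int)
    X1, y1 = co_train(X0, y0, rng.normal(size=(200, 6)), [0, 1, 2], [3, 4, 5])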
Depth Estimation from Single Defocused Image Based on Gaussian-Cauchy Mixed Model
XUE Song and WANG Wen-jian
Computer Science. 2017, 44 (1): 32-36.  doi:10.11896/j.issn.1002-137X.2017.01.006
Recovering the 3D depth of a scene from a single image is a difficult problem in the field of computer vision. Most methods for depth estimation from a single defocused image construct the point spread function from a 2D Gaussian or Cauchy distribution. However, the causes of blur in real-world images are varied, so a simple Gaussian or Cauchy distribution function may not be the best approximation model. Such methods are often influenced by noise and inaccurate edge locations, making high-quality depth estimation difficult to achieve. A Gaussian-Cauchy mixed distribution model was presented in this paper to re-blur the given defocused image, yielding two images blurred to different degrees. We estimated the sparse depth map from the ratio of gradients at edge locations between the two blurred images. In so doing, a full depth map can be recovered by matting Laplacian interpolation. Experimental results on real images demonstrate that the proposed approach is effective and better than the two commonly used approaches.
Online Sequential Active Learning Approach
ZHAI Jun-hai, ZANG Li-guang and ZHANG Su-fang
Computer Science. 2017, 44 (1): 37-41.  doi:10.11896/j.issn.1002-137X.2017.01.007
In the real world, there are large amounts of unlabeled data, such as various medical images and web data. In the era of big data, this situation is even more prominent, and labeling such large amounts of data is expensive. Active learning is an effective method to solve this problem, and it is one of the hot research topics in the fields of machine learning and data mining. Based on the online sequential extreme learning machine, an active learning algorithm was proposed in this paper. Due to the incremental learning nature embedded in the online sequential extreme learning machine, the proposed algorithm can significantly improve the efficiency of the learning system. Furthermore, the proposed algorithm uses instance entropy as a heuristic to measure the importance of unlabeled instances, and uses a K-nearest neighbor classifier as the oracle to label the selected instances. The experimental results show that the proposed algorithm has fast learning speed with exact labeling.
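A minimal sketch of the selection step, with logistic regression standing in for the online sequential ELM (the learner, pool sizes, and batch size are stand-in assumptions; only the entropy heuristic and the KNN oracle follow the abstract).

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(20, 4)); y_lab = (X_lab[:, 0] > 0).astype(int)
    X_pool = rng.normal(size=(100, 4))            # unlabeled pool

    learner = LogisticRegression().fit(X_lab, y_lab)   # stand-in for OS-ELM
    oracle = KNeighborsClassifier(n_neighbors=3).fit(X_lab, y_lab)

    p = np.clip(learner.predict_proba(X_pool), 1e-12, 1.0)
    H = -(p * np.log(p)).sum(axis=1)              # instance entropy
    picked = np.argsort(-H)[:5]                   # most uncertain instances
    y_new = oracle.predict(X_pool[picked])        # KNN plays the oracle
    # (X_pool[picked], y_new) would then be fed to the OS-ELM incrementally,
    # without retraining on the whole labeled set.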
Automatic Construction and Optimization of Sentiment Lexicon Based on Word2Vec
YANG Xiao-ping, ZHANG Zhong-xia, WANG Liang, ZHANG Yong-jun, MA Qi-feng, WU Jia-nan and ZHANG Yue
Computer Science. 2017, 44 (1): 42-47.  doi:10.11896/j.issn.1002-137X.2017.01.008
The construction of sentiment lexicons plays an important role in text mining. In recent years, the lexicon annotation format has gradually evolved from binary annotation to multiple annotation, and sentiment lexicons for single specific domains have attracted more and more attention from researchers. However, manual annotation costs too much labor and time, and it is also difficult to obtain an accurate quantification of emotional intensity. Besides, the excessive emphasis on one specific field greatly limits the applicability of domain sentiment lexicons[1]. This paper implemented statistical training on a large-scale Chinese corpus through a neural network language model, and proposed an automatic method of constructing a multidimensional sentiment lexicon based on constraints of Euclidean distance groups. In order to distinguish the sentiment polarities of words which may express either positive or negative meanings in different contexts, we further presented a sentiment disambiguation algorithm to increase the flexibility of our lexicon. Lastly, we presented a global optimization framework that provides a unified way to combine several human-annotated resources for learning our 10-dimensional sentiment lexicon SentiRuc. Experiments show the superior performance of the SentiRuc lexicon in category labeling tests, intensity labeling tests and sentiment classification tasks. It is worth mentioning that in the intensity labeling test, SentiRuc outperforms the second place by 23%.
Selective Ensemble Learning Algorithm Based on Hierarchical Selection and Dynamic Updating in Parallel
WU Mei-hong, GUO Jia-sheng, JU Ying, LIN Zi-yu and ZOU Quan
Computer Science. 2017, 44 (1): 48-52.  doi:10.11896/j.issn.1002-137X.2017.01.009
In this paper, a selective ensemble learning algorithm based on hierarchical selection and dynamic updating was proposed, which optimizes the parameters of classifiers with multi-thread techniques and selects the sub-sequence set of classifiers based on hierarchical selection and dynamic information. It solves the inefficiency of past approaches to choosing classifiers for ensemble learning. In addition, a divide-and-conquer strategy is employed to reduce the time cost of ensemble voting: the large voting task is recursively divided into small subtasks by dichotomy, the subtasks are executed in parallel, and their voting results are then merged. Experimental results show that the selective algorithm outperforms traditional classification algorithms on F1-Measure and AUC.
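A small sketch of the dichotomy voting idea; the labels, the classifier count, and the two-worker pool are illustrative, and the paper parallelizes much larger tasks.

    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def count_votes(preds):
        # Recursively halve the per-classifier predictions for one sample
        # (dichotomy) and merge the partial vote counts on the way back.
        if len(preds) <= 2:
            return Counter(preds)
        mid = len(preds) // 2
        return count_votes(preds[:mid]) + count_votes(preds[mid:])

    preds = ['A', 'B', 'A', 'A', 'C', 'A', 'B', 'A']   # 8 base classifiers
    with ThreadPoolExecutor(max_workers=2) as ex:      # halves are independent
        left = ex.submit(count_votes, preds[:4])
        right = ex.submit(count_votes, preds[4:])
        tally = left.result() + right.result()
    print(tally.most_common(1)[0][0])                  # majority label: 'A'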
Optimized Negotiation Model Based on Reinforcement Learning of Medium Agent
ZHANG Jing-min and DONG Hong-bin
Computer Science. 2017, 44 (1): 53-59.  doi:10.11896/j.issn.1002-137X.2017.01.010
This paper proposed a bilateral optimized negotiation model based on reinforcement learning, introducing a medium agent. The medium agent uses different parameters in the reinforcement learning strategy to produce proposals, and selects the best parameters to negotiate, with the purpose of further improving negotiation performance. The paper then presented the adaptive learning ability based on the medium agent. The simulation results show the effectiveness of the proposed negotiation method and that it can improve the performance of negotiation.
Self-adaptation Multi-gram Weight Learning Strategy for Sentence Representation Based on Convolutional Neural Network
ZHANG Chun-yun, QIN Peng-da and YIN Yi-long
Computer Science. 2017, 44 (1): 60-64.  doi:10.11896/j.issn.1002-137X.2017.01.011
Nowadays, with the explosive growth of information, natural language processing has received more and more attention. Traditional natural language processing systems are overly dependent on expensive handcrafted features annotated by experts and on syntax information from language analysis tools. Deep neural networks can achieve end-to-end learning even without costly features. In order to extract more information from input sentences, most neural networks for natural language processing are combined with a multi-gram strategy. However, across different tasks and datasets, the information distribution of diverse n-grams differs. With this consideration, this paper proposed a self-adaptation weight learning strategy for multi-grams, which generates the importance order of multi-grams through the training procedure of the neural network. Moreover, a novel combination method of multi-gram feature vectors was exploited. Experimental results show that this method can not only reduce the complexity of the network, but also improve the performance of positive and negative tendency classification of movie reviews, as well as relation classification.
Improved Multi-view Clustering Ensemble Algorithm
DENG Qiang, YANG Yan and WANG Hao
Computer Science. 2017, 44 (1): 65-70.  doi:10.11896/j.issn.1002-137X.2017.01.012
In recent years, data mining and machine learning algorithms for big data have become increasingly important. In clustering, with the appearance of multi-view data, multi-view clustering has become an important clustering method. However, many existing multi-view clustering algorithms are easily affected by parameter settings and the dataset itself, so the clustering results are usually unstable. To overcome this problem, we presented a new multi-view clustering ensemble algorithm based on the multi-view K-means clustering algorithm. This algorithm uses ensemble techniques to improve the performance of the multi-view K-means algorithm, increasing the accuracy, robustness and stability of clustering results. Since a single computer has limited computational resources and cannot process very large data, we also implemented a distributed multi-view clustering ensemble algorithm based on distributed processing technology to improve the efficiency of multi-view clustering. Experimental results show that the proposed approach has higher efficiency when processing large datasets, and it is suitable for multi-view clustering in big data environments.
Improved Linear Influence Diffusion Model Based on Users’ Distance
CAI Guo-yong and PEI Guang-zhan
Computer Science. 2017, 44 (1): 71-74.  doi:10.11896/j.issn.1002-137X.2017.01.013
The prediction of information diffusion based on the historical behavior of users in online social networks is one of the hot spots of current research. However, traditional propagation models can only explain the regular patterns of information diffusion in social networks; they cannot predict the dissemination of information. Since uninfected users will be affected by infected users, Jaewon Yang and Jure Leskovec proposed the linear influence model (LIM), but LIM only considers the time factor in the process of information dissemination and ignores spatial information, namely the relationships between users. Therefore, a measure of users' distance in social networks was first introduced in this paper. Combining the distance measure with the LIM model, we proposed an improved LIM model based on distance regularization, namely the d-LIM model. Experiments on real data sets show that the d-LIM model achieves better prediction accuracy than the compared methods.
Triploid Individual Haplotype Reconstruction Algorithm Based on Enumeration Strategy
ZHANG Qian and WU Jing-li
Computer Science. 2017, 44 (1): 75-79.  doi:10.11896/j.issn.1002-137X.2017.01.014
Determining triploid individual haplotypes plays an important role in promoting the study of genetic traits and phenotypic differences of triploid species. In this paper, based on the minimum error correction with genotype information (MEC/GI) model, an enumeration-strategy-based haplotyping algorithm for triploids (EHTR) was proposed for solving the triploid individual haplotype reconstruction problem. The EHTR algorithm reconstructs the SNP sites of the three haplotypes one after another. When reconstructing a given SNP site, it enumerates the three kinds of SNP values consistent with the genotype of the site, and chooses the one with the highest support degree from the SNP fragments covering the corresponding site. The total time complexity is O(mn + mlogm + cnl). In the experiments, two simulators, CELSIM and MetaSim, were used to generate SNP fragments. The reconstruction rate and running time of the algorithms EHTR, GTIHR, W-GA and Q-PSO were compared and analyzed under different parameter settings, such as fragment coverage, error rate, single fragment length, haplotype length and haplotype Hamming distance. Under the various parameter settings, the EHTR algorithm obtains a higher reconstruction rate in shorter running time, as demonstrated by a number of experiments.
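A toy sketch of the per-site enumeration step under simplifying assumptions: the fragments are already partitioned among the three haplotypes, and support is a plain agreement count.

    from itertools import permutations

    def best_assignment(alleles, frags_at_site):
        # alleles: the genotype's multiset at this site, e.g. (0, 0, 1);
        # frags_at_site: (haplotype_index, observed_allele) pairs from the
        # fragments covering the site. At most three assignments are distinct.
        candidates = set(permutations(alleles))
        def support(assign):
            return sum(1 for h, a in frags_at_site if assign[h] == a)
        return max(candidates, key=support)

    # Fragments on haplotypes 0, 1, 2 observe these alleles at one site:
    print(best_assignment((0, 0, 1), [(0, 0), (1, 0), (2, 1), (2, 1)]))
    # -> (0, 0, 1), the assignment best supported by the covering fragments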
Integrated Feature Mining Based Approach for Calling Genomic Deletions
ZHANG Xiao-dong, LING Cheng and GAO Jing-yang
Computer Science. 2017, 44 (1): 80-83.  doi:10.11896/j.issn.1002-137X.2017.01.015
With the application and development of next-generation sequencing technology, methods for calling genomic deletions based on sequencing have proliferated. However, using a single method to call deletions has limited applicability and insufficient precision and sensitivity. To solve these problems, an integrated approach for calling deletions was proposed based on feature mining, combining multiple detection strategies with a machine learning algorithm. First, different callers are used for calling deletions, and their results are merged into an initial result set of deletions. Then, according to the various detection strategies, features of the initial result set are extracted from the next-generation sequencing data. Finally, to obtain the final set of called deletions, a machine learning model is trained to distinguish false positive deletions from the initial call set. The experimental results show that, compared with single callers such as Pindel and SVseq2, the proposed approach achieves higher precision and sensitivity simultaneously. Compared with directly merging multiple deletion call sets, the proposed approach significantly improves precision with a slight loss of sensitivity.
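A minimal sketch of the final filtering step, assuming a random forest and a hypothetical feature vector per merged candidate; the real features and labels follow the paper's detection strategies.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative features for each candidate deletion in the merged set:
    # [split-read support, discordant-pair support, relative read depth,
    #  mean mapping quality, number of callers reporting it]
    X_train = np.array([[12, 8, 0.40, 55, 2],
                        [ 1, 0, 1.10, 20, 1],
                        [ 9, 6, 0.50, 60, 2],
                        [ 0, 1, 0.90, 15, 1]], dtype=float)
    y_train = np.array([1, 0, 1, 0])       # 1 = true deletion, 0 = false positive

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    candidate = np.array([[10, 7, 0.45, 58, 2]])
    print(clf.predict(candidate))          # keep or discard the candidate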
Interval Parameters Optimization Model under Three-way Decisions Space and Its Application
LI Ming-xia, LIU Bao-xiang and ZHANG Chun-ying
Computer Science. 2017, 44 (1): 84-89.  doi:10.11896/j.issn.1002-137X.2017.01.016
The interval concept lattice theory is a new method of mining objects based on interval parameters, and it can deal with uncertain information more accurately. The interval parameters α and β determine the interval concepts and the lattice structure, and thereby affect the extracted decision rules. In order to solve the optimization problem of the interval parameters, we first combined the theories of interval concept lattices and three-way decision-theoretic rough sets, and put forward a three-way decision space theory. Secondly, according to this theory, the extension of an interval concept was divided into a positive region, a negative region and a boundary region; moreover, the three-way decision rules and the decision loss function were proposed. Through adjusting the interval parameters, more credible decision rules can be found, and the interval parameters can then be optimized. Finally, we verified the model with an example.
Multigranulation Covering Rough Intuitionistic Fuzzy Sets Model Based on Minimal and Maximal Descriptions
XUE Zhan-ao, SI Xiao-meng, WANG Nan and ZHU Tai-long
Computer Science. 2017, 44 (1): 90-94.  doi:10.11896/j.issn.1002-137X.2017.01.017
Covering rough sets and intuitionistic fuzzy sets, which are strongly complementary, are basic theories for dealing with uncertainty, and combining them is a hot research topic. In this paper, the combination of multigranulation covering rough sets and intuitionistic fuzzy sets was studied. Firstly, minimal and maximal descriptions were extended from single granulation to multigranulation, and the fusion of multiple granularities was discussed. Secondly, the concepts of fuzzy covering rough membership and non-membership were defined on minimal and maximal descriptions respectively. Then, two new models were constructed: multigranulation covering rough intuitionistic fuzzy sets based on minimal description and multigranulation covering rough intuitionistic fuzzy sets based on maximal description; their properties were discussed and illustrated with examples. Finally, the relationships between the two models were investigated. This study provides a new method for the combination of multigranulation covering rough sets and intuitionistic fuzzy sets.
Research on Sense Guessing of Chinese Unknown Words Based on Knowledge Graph
ZHU Feng, GU Min, ZHENG Hao, GU Yan-hui, ZHOU Jun-sheng and QU Wei-guang
Computer Science. 2017, 44 (1): 95-99.  doi:10.11896/j.issn.1002-137X.2017.01.018
Semantic study based on traditional corpora has many limitations, such as infrequent updating and language dependence. To tackle these issues, sense guessing of Chinese unknown words based on a knowledge graph (KG) was proposed in this paper. A KG is a semantic network containing entities, concepts and semantic relations. It has a huge number of entities and relations, and it is very convenient to add new ones to the KG, which makes it possible to fix the infrequent-updating problem. After introducing the structure of the knowledge graph, how to acquire data and ways to process them, some explorations of KG-based sense guessing of Chinese unknown words were executed. At last, BaiduBaike, which has the most abundant Chinese data, was used as the corpus together with traditional ones in experiments particularly designed to use one specific sense guessing model. This paper also compared the results of experiments based on different knowledge bases and proposed some improvements.
Image Segmentation Algorithm of Spectral Clustering Optimized by Genetic Algorithm
QIN Xiao, LIANG Wei, YUAN Chang-an and TANG Tao
Computer Science. 2017, 44 (1): 100-102.  doi:10.11896/j.issn.1002-137X.2017.01.019
Traditional spectral clustering methods use k-means to achieve the final clustering. However, k-means is sensitive to initial conditions and easily falls into local optima, which influences the effect of image segmentation with spectral clustering. This paper proposed an image segmentation algorithm of spectral clustering optimized by a genetic algorithm (ISCOG), using a GA instead of k-means in the spectral clustering algorithm. Experiments on synthetic images and real images show that the ISCOG algorithm greatly improves the stability and clustering quality of the spectral clustering algorithm.
Zero-One Integer Programming Based Optimization Model and Two-phase Resource Optimization Algorithm for Wireless Ad hoc Networks
LIU Wei, ZHAO Yu and CHEN Rui
Computer Science. 2017, 44 (1): 103-108.  doi:10.11896/j.issn.1002-137X.2017.01.020
To solve the problem that the capacity of wireless Ad Hoc networks decreases as the number of nodes increases, using multi-radio multi-channel (MR-MC) technology to distribute resources and reduce the interference among nodes has become an important technique for wireless network performance optimization. Therefore, we proposed a zero-one integer programming network optimization model and a tree-based channel assignment and link scheduling approach, so that the limited available channels can work optimally and ensure a higher capacity in wireless Ad Hoc networks. We implemented the algorithms in Matlab 7.0. The results show that our method performs much better than CCAS and an algorithm that only uses channel assignment.
Load Balancing Routing Protocol Based on Traffic Prediction for Wireless Mesh Networks
LIU Yong-bo, LIU Nai-an, LI Xiao-hui and JI Qiong
Computer Science. 2017, 44 (1): 109-112.  doi:10.11896/j.issn.1002-137X.2017.01.021
We proposed a load balancing routing protocol for wireless mesh networks based on a neural network prediction model, namely NNP-L2MPM. In this protocol, the optimal next hop to the destination is selected using the path quality, which is calculated from HELLO packets flooded over the network. The traffic load is measured by the length of the queue at the MAC-layer interface. An RBF neural network model is then used to forecast the traffic load at the nodes of the mesh network. According to the forecasted traffic load for the next time period, the path quality is optimized, routing is updated in advance, congestion at intermediate nodes is avoided, and network performance is finally improved. Compared with the original routing protocol, simulation results show that the packet delivery rate increases by about 9%, and the average end-to-end delay decreases by about 16%.
Optimization Selection Mechanism for Service Nodes in Hybrid Crowd Sensing
HE Xin, LIU Tian-xu, DING Shuang and BAI Lin
Computer Science. 2017, 44 (1): 113-116.  doi:10.11896/j.issn.1002-137X.2017.01.022
The application of crowd sensing depends on the participation of mobile users. The sensing ability of a user is subject to their movement pattern, the remaining resources of their portable device, and other factors. Existing research on service node selection is relatively simple, so it is necessary to design an optimization selection mechanism that selects an optimal set of service nodes and ensures the sensing quality of the target area. Based on the movement characteristics of users, we proposed and defined service metrics, and then designed the optimization selection mechanism using a genetic algorithm. The simulation results show that the optimization selection mechanism can effectively select the best set of service nodes and thereby improve the service quality of hybrid crowd sensing.
Energy Consumption Optimization Strategy for Data Transmission Based on Data Arrival Rate in Mobile Networks
PENG Ying, WANG Gao-cai and WANG Nao
Computer Science. 2017, 44 (1): 117-122.  doi:10.11896/j.issn.1002-137X.2017.01.023
Energy consumption of data transmission is a significant part of the energy consumption of mobile networks, and energy-efficient transmission is an important way to optimize it. The problem of minimizing average energy consumption based on the data arrival rate was studied for data with delay demands. The minimization problem was formulated using the time-varying characteristics of wireless channel quality and turned into an optimal stopping problem, and the existence of an optimal stopping rule was proved. Finally, the optimal myopic stopping rule was solved to obtain the optimal transmission rate threshold at each detection time, realizing an energy consumption optimization strategy for data transmission based on the data arrival rate. Simulations compared this strategy with others in terms of average energy consumption, average delivery ratio and average scheduling period. The results show that our strategy has lower average energy consumption and a higher average delivery ratio, gaining a better optimization effect.
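A toy sketch of the resulting rule, assuming the per-slot rate threshold has already been derived; the uniform channel sampler and slot count are placeholders, not the paper's channel model.

    import random

    def transmit_with_threshold(rate_threshold, deadline_slots, sample_rate):
        # Myopic stopping rule: probe the channel at each detection time and
        # transmit as soon as the achievable rate reaches the threshold, or
        # when the delay deadline forces transmission at the last slot.
        for t in range(deadline_slots):
            rate = sample_rate()              # time-varying channel quality
            if rate >= rate_threshold or t == deadline_slots - 1:
                return t, rate                # stop here: transmit now

    random.seed(1)
    slot, rate = transmit_with_threshold(0.8, 10, random.random)
    print(slot, round(rate, 2))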
SDN Optimization Algorithm Based on Prediction and Dynamic Load Factor
SHI Shao-ping, ZHUANG Lei and YANG Si-jin
Computer Science. 2017, 44 (1): 123-127.  doi:10.11896/j.issn.1002-137X.2017.01.024
This paper studied the adjustment of flow tables in software defined networking (SDN). An SDN optimization of the flow table algorithm (SOOTFTA) based on prediction and a dynamic load factor was proposed to address the problem that switches do not have enough space in the flow table for newly arrived flows. Firstly, SOOTFTA collects the number of newly arrived flows during each unit of time. Based on this information, it estimates the number of newly arrived flows in the next time unit using the second moving average method (SMA). Finally, SOOTFTA dynamically adjusts the idle timeout of the flows in the table based on the dynamic load factor. Experimental results show that the flow table matching rate and data forwarding rate are improved by the proposed SOOTFTA, and the number of active entries in the flow table increases.
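A small sketch of the second (double) moving average forecast the prediction step relies on; the arrival counts and window size are illustrative.

    def sma_forecast(series, n, horizon=1):
        # Smooth the series once, smooth the smoothed values again, then
        # extrapolate level a and per-period trend b.
        m1 = [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]
        m2 = [sum(m1[i - n + 1:i + 1]) / n for i in range(n - 1, len(m1))]
        a = 2 * m1[-1] - m2[-1]
        b = 2 / (n - 1) * (m1[-1] - m2[-1])
        return a + b * horizon

    arrivals = [30, 34, 33, 38, 41, 44, 47, 50]   # new flows per time unit
    print(sma_forecast(arrivals, n=3))            # estimate for the next unit: 53.0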
Frequent Pattern Mining in Bit Stream Based on AC Algorithm
LEI Dong, WANG Tao and MA Yun-fei
Computer Science. 2017, 44 (1): 128-133.  doi:10.11896/j.issn.1002-137X.2017.01.025
Existing methods of frequent pattern mining in bit streams are inefficient, and the precision of their results is low under the influence of redundant data. To solve this problem, it was first proved that mining short frequent patterns has great limitations. According to the different features of patterns of different lengths, frequent sequences were divided into two types: long frequent patterns and short frequent patterns. An algorithm for finding the header fields of protocols in a bit stream was proposed, and efficient algorithms for mining long frequent patterns and short frequent patterns were proposed based on the AC multi-pattern matching algorithm. Simulation results on Ethernet traffic show that the proposed algorithms are effective.
Clustering Routing Algorithm for Heterogeneous Wireless Sensor Networks with Self-supplying Nodes
XU Xin-li, LV Qi, WANG Wan-liang and HUANGFU Xiao-jie
Computer Science. 2017, 44 (1): 134-139.  doi:10.11896/j.issn.1002-137X.2017.01.026
Aiming at the problems of short network lifetime and unbalanced energy consumption in existing clustering routing algorithms, this paper presented a new clustering routing algorithm for heterogeneous wireless sensor networks with energy self-supplying nodes. Considering that energy supply is not stable in actual environments, an energy-balanced cluster head election mechanism and a multi-hop inter-cluster routing scheme were designed for heterogeneous wireless sensor networks according to the residual energy and current energy supply states of nodes. Simulation results show that the proposed algorithm is more effective at extending the network life cycle and balancing the energy consumption of the whole network than traditional clustering routing algorithms (LEACH and SEP) with the same energy replenishment mechanism and other clustering routing algorithms based on energy harvesting (PHC and EBCS).
Minimal Energy Transmitters Placement Approaches for RF-energy Harvesting Heterogeneous Wireless Sensor Networks
CHI Kai-kai, ZHU Liu-shuan, CHENG Zhen and TIAN Xian-zhong
Computer Science. 2017, 44 (1): 140-144.  doi:10.11896/j.issn.1002-137X.2017.01.027
The applications of battery-powered wireless sensor networks are greatly restricted by inconvenient or even impossible battery replacement. This paper considered RF-energy harvesting heterogeneous wireless sensor networks, where different sensor nodes may have different requirements on the power output of energy harvesting, and studied how to place energy transmitters (ETs) so that the power output requirements of all nodes are satisfied while the number of ETs is minimized. This paper first formulated the ET placement problem so as to understand it deeply and theoretically, and then presented a low-complexity greedy scheme and a particle swarm optimization (PSO)-based scheme with relatively high complexity. Simulation results demonstrate that, compared to the greedy scheme, the PSO-based scheme is able to slightly reduce the average number of ETs. However, as the PSO-based scheme has relatively high complexity, it can be used for scenarios with few nodes, whereas the greedy scheme can be used for scenarios with a large number of nodes.
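A minimal sketch of a greedy placement in this spirit; the candidate grid, the path-loss callback, and the tie-breaking fallback are assumptions, and the paper's exact greedy rule may differ.

    def greedy_et_placement(nodes, candidates, power_at):
        # nodes: {position: required power}; power_at(p, v): power an ET at
        # p contributes to node v (a path-loss model in practice). A node
        # accumulates power from every ET placed so far.
        received = {v: 0.0 for v in nodes}
        placed = []
        while any(received[v] < nodes[v] for v in nodes):
            def newly_satisfied(pos):
                return sum(1 for v in nodes
                           if received[v] < nodes[v]
                           and received[v] + power_at(pos, v) >= nodes[v])
            best = max(candidates, key=newly_satisfied)
            if newly_satisfied(best) == 0:    # no ET finishes a node alone:
                best = max(candidates,        # fall back to most added power
                           key=lambda p: sum(power_at(p, v) for v in nodes))
            placed.append(best)
            for v in nodes:
                received[v] += power_at(best, v)
        return placed

    nodes = {0.0: 0.3, 2.0: 0.3, 9.0: 0.5}    # 1-D toy deployment
    cands = [0.0, 1.0, 8.0]
    power = lambda p, v: 1.0 / (1.0 + (p - v) ** 2)
    print(greedy_et_placement(nodes, cands, power))   # -> [1.0, 8.0]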
Analysis on Fitting Model of Network Covert Timing Channel
YANG Peng, ZHAO Hui and BAO Zhong-gui
Computer Science. 2017, 44 (1): 145-148.  doi:10.11896/j.issn.1002-137X.2017.01.028
With the rapid development of computer networks, network security has attracted more and more attention. Among the many methods of network attack, network covert channels have become one of the main threats to computer security. Because of its undetectable nature and high data transmission rate, the covert timing channel has become one of the current research hot spots in the field of information security. This paper constructed a model for the transmission process of a network covert timing channel. The model describes how to encode and modulate covert messages using spreading codes. On this basis, we analyzed the probability distribution of the constructed model, and made a comprehensive comparison with the Poisson distribution, which is used to fit the legitimate channel. Aiming to analyze the concealment and data transmission rate of covert channels, we first analyzed the parameters that impact these properties of covert timing channels, and also discussed the relationships between them, which has certain significance for future work on network covert timing channels.
Multi-keyword Ranked Search Method Based on B+ Tree
NA Hai-yang, YANG Geng and SHU Xiao-wei
Computer Science. 2017, 44 (1): 149-154.  doi:10.11896/j.issn.1002-137X.2017.01.029
With the large amount of information and the storage of encrypted private information in society, it has become more difficult for users to retrieve such information. Based on research and analysis of existing searchable encryption schemes, a method of multi-keyword ranked search based on a B+ tree was proposed in this paper. Specifically, the vector model is used in index construction and trapdoor generation, and the search results are sorted according to the relevance score and the number of matched keywords. Finally, experiments on a real dataset demonstrate the search efficiency of the proposed scheme.
Cross-domain PKI-based Key Agreement Protocol
WEI Zhen-yu, LU Xiang and SHI Ting-jun
Computer Science. 2017, 44 (1): 155-158.  doi:10.11896/j.issn.1002-137X.2017.01.030
It has been proven that security risks exist in most password-based cross-domain authentication and key agreement protocols and in the Kerberos protocol. A more effective protocol is necessary to ensure communication security in areas such as finance and aerospace, which require a high level of communication security. This paper proposed a cross-domain PKI-based key agreement protocol. The protocol efficiently solves the key exposure problem, which enables password guessing and man-in-the-middle attacks and results from using shared-key encryption and decryption to secure data transmission, or from weak Kerberos passwords. To solve this problem, the protocol adopts a public key algorithm and uses the Diffie-Hellman protocol to create the session key. Meanwhile, the protocol frees users from repetitive configuration of cross-domain server public key information, which reduces the complexity of configuration and key management between users and servers. Besides, the protocol improves the ability to verify authenticity and the confidentiality of information, and is immune to multiple kinds of attacks. The protocol also has forward security and good expansibility.
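A toy sketch of the Diffie-Hellman session-key step; the modulus below is far too small for real use (deployments use standardized 2048-bit groups), and the PKI signing of the exchanged values, which blocks man-in-the-middle attacks, is only indicated in comments.

    import secrets

    p, g = 0xFFFFFFFFFFFFFFC5, 5             # demo modulus (2**64 - 59) and base

    a = secrets.randbelow(p - 3) + 2         # initiator's ephemeral secret
    b = secrets.randbelow(p - 3) + 2         # responder's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)        # exchanged inside certificate-signed
                                             # messages in the cross-domain protocol
    k1, k2 = pow(B, a, p), pow(A, b, p)      # each side derives the session key
    assert k1 == k2                          # same key, never sent on the wire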
Defense Technology Based on Dynamic Space-Time Performance for Flooding Attacks in Mobile Ad Hoc Networks
WANG Wei, WANG Jia-jun, WANG Ming-ming, ZHANG Wen-jing and CHEN Jin-guang
Computer Science. 2017, 44 (1): 159-166.  doi:10.11896/j.issn.1002-137X.2017.01.031
Flooding attacks are a kind of seriously harmful DoS attack in mobile Ad Hoc networks. However, existing research on defense against flooding attacks is mostly unsuited to the characteristics of Ad Hoc networks (such as limited resources and dynamic topology), and cannot keep the balance between network performance and network security. On the basis of an analysis of the inherent relations among space-time dynamic properties, network performance evaluation and security threats, a defense technology based on performance evaluation for flooding attacks in mobile Ad Hoc networks was presented. With measurable system evaluation indexes for security threat, defense income and defense cost, a mechanism for making defense policies and optimizing defense performance is achieved in the proposed system. Simulation results show that the proposed defense technology can overcome many drawbacks of existing security technologies for mobile Ad Hoc networks, and can thus meet the network properties and actual applications of mobile Ad Hoc networks.
Improved Attribute-based Encryption Scheme
SONG Wen-na, XIANG Guang-li, LI An-kang, ZHANG Yue-xin and TAO Ran
Computer Science. 2017, 44 (1): 167-171.  doi:10.11896/j.issn.1002-137X.2017.01.032
Attribute-based encryption is suitable for one-to-many broadcast encryption environments, makes it easy to implement fine-grained access control, and protects the user's privacy well. This paper summarized the current state of development of attribute-based encryption. Through an analysis of the security assumption of the Waters scheme, the Eq-BDHE assumption, whose random parameters satisfy a certain specific relation, was presented, and an improved CP-ABE encryption scheme was implemented. Security analysis and comparative experiments show that the new scheme has better security, reduces the number of system parameters, and improves the efficiency of encryption and decryption operations.
Improved RFID Key Wireless Generation Algorithm Based on Tag Part ID
HUANG Qi, LING Jie and HE Xiao-tao
Computer Science. 2017, 44 (1): 172-175.  doi:10.11896/j.issn.1002-137X.2017.01.033
Aiming at the security problem that the initial key is easy to leak in radio frequency identification systems, an RFID key wireless generation algorithm based on partial tag ID was proposed. Before the tag and reader authenticate each other, the shared key is generated by XORing part of the tag ID with a random number generated by the reader. Security analysis shows that the proposed algorithm can effectively resist replay attacks, man-in-the-middle attacks, desynchronization attacks and other active attacks, as well as passive attacks, with the advantages of high security and low cost.
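A toy sketch of the key-generation idea; the ID mask, bit widths, and message flow are illustrative, while the actual protocol fixes which ID bits are used and how the random number reaches the tag.

    import secrets

    def generate_shared_key(tag_id, id_mask, rand_bits=32):
        # XOR a selected part of the tag ID with a fresh reader-generated
        # random number; the full ID and the key never travel together.
        part_id = tag_id & id_mask            # the agreed-upon part of the ID
        r = secrets.randbits(rand_bits)       # reader-generated random number
        return r, part_id ^ r                 # r is transmitted; the key is not

    tag_id = 0x3A7F19C2
    r, key = generate_shared_key(tag_id, id_mask=0x0000FFFF)
    assert key == (tag_id & 0x0000FFFF) ^ r   # tag recomputes the same key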
Optimization of RSA Encryption Algorithm for Android Mobile Phone and Design of QR Code Encryption Security System
FANG Wen-he, LI Guo-he, WU Wei-jiang, HONG Yun-feng and ZHOU Xiao-ming
Computer Science. 2017, 44 (1): 176-182.  doi:10.11896/j.issn.1002-137X.2017.01.034
A mobile two-dimensional (QR) code encryption security system was designed based on the Android intelligent mobile terminal. The encryption module is based on the RSA algorithm. To improve the running efficiency of RSA on the mobile terminal, the Monte Carlo probabilistic algorithm is combined with Miller-Rabin prime testing optimization strategies to obtain a fast random strong prime algorithm, improving the efficiency of RSA initialization and encryption, and the MMRC decryption algorithm is used to optimize the RSA decryption process. In addition, the M-ary algorithm is introduced to optimize the modular exponentiation carried out during RSA. With the above three optimizations in place, 200 comparison tests show that the improved RSA algorithm in the Android cryptographic security module performs significantly better than the original RSA algorithm.
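A minimal sketch of the Miller-Rabin test underlying such prime generation; the round count and candidate bit length are illustrative, and the paper's strong-prime construction adds further conditions on top of primality.

    import random

    def is_probable_prime(n, rounds=40):
        # A composite survives one random round with probability < 1/4,
        # so the error probability after `rounds` trials is < 4**-rounds.
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2; s += 1                   # n - 1 = d * 2**s with d odd
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                  # a witnesses that n is composite
        return True

    while True:                               # draw random odd 256-bit candidates
        c = random.getrandbits(256) | (1 << 255) | 1
        if is_probable_prime(c):
            break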
New Ultra-lightweight RFID Authentication Protocol
ZHANG Ya-li, GUO Ya-jun, CUI Jian-qun and ZENG Qing-jang
Computer Science. 2017, 44 (1): 183-187.  doi:10.11896/j.issn.1002-137X.2017.01.035
RFID (Radio Frequency Identification) technology is widely applied in many fields of life and production, such as access control and payment equipment, communicating wirelessly. However, the wireless communication environment between the reader and the tag exposes RFID devices to many malicious attacks and security threats. Because a low-cost tag has only very limited computing power and storage space, common block ciphers and hash functions cannot be used on it. To solve the security problem of low-cost tags, this paper proposed a new ultra-lightweight RFID authentication protocol, SIUAP, built on bitwise operations. Based on the lightweight round function F(x) and the nonlinear function MIXBITS from the SIMON algorithm, the SIUAP protocol adopts three simple bitwise operations: AND, XOR and cyclic shift, which greatly reduces the computational complexity. Through formal analysis of the protocol with GNY logic, it has been proved that SIUAP can realize the authentication of the reader and the tag. Meanwhile, a security analysis of SIUAP was also given. Compared with existing ultra-lightweight authentication protocols, SIUAP has lower computational cost, which can meet the requirements of low-cost, high-security RFID systems.
New Image Encryption Algorithm Based on New Four-dimensional Discrete-time Chaotic Map
ZHU Shu-qin, LI Jun-qing and GE Guang-ying
Computer Science. 2017, 44 (1): 188-193.  doi:10.11896/j.issn.1002-137X.2017.01.036
A novel four-dimensional discrete-time chaotic system was constructed on the basis of a modified version of Marotto's theorem. On the basis of this system, a digital image encryption scheme was proposed, in which the 256-bit hash value of the plain image is used to generate the initial values of the chaotic sequences; the encryption key generated from the chaotic sequences is therefore related to the plain image, enhancing the security of the encryption system. Theoretical analysis and simulation experiments show that the key space of the image encryption scheme is as large as 3.4×10^100, the histogram of the encrypted image is close to the uniform distribution, the correlation between pixels is eliminated, and the information entropy of the encrypted image is close to 8 bits, so the encrypted image carries no obvious statistical information. The scheme is extremely sensitive to perturbations of the initial conditions of the chaotic system: any perturbation larger than 10^-15 makes the corresponding decryption impossible. The encryption is also very sensitive to the plain image and can resist differential attack.
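A small sketch of the hash-to-initial-values step; the mapping of 64-bit hash chunks into (0, 1) is an illustrative choice, and the paper's exact derivation may differ.

    import hashlib

    def initial_conditions(image_bytes):
        # SHA-256 of the plain image ties the keystream to the image itself:
        # a single flipped pixel bit changes the whole digest (avalanche).
        digest = hashlib.sha256(image_bytes).digest()       # 32 bytes
        chunks = [digest[i:i + 8] for i in range(0, 32, 8)] # four 64-bit parts
        return [int.from_bytes(c, 'big') / 2**64 for c in chunks]

    x0 = initial_conditions(b"raw pixel bytes go here")     # four values in (0, 1),
    print(x0)                                               # one per state variable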
New Method for Computing Eventual Linear Ranking Functions
ZHU Guang, LI Yi and WU Wen-yuan
Computer Science. 2017, 44 (1): 194-198.  doi:10.11896/j.issn.1002-137X.2017.01.037
Termination of linear loop programs has received extensive attention in recent years. In this paper, we focused on the termination of linear constraint loops that have no traditional linear ranking function. A method by Bagnara for detecting eventual linear ranking functions was first introduced for termination analysis; such an eventual linear ranking function implies that the loop terminates. We then presented a new method for computing eventual linear ranking functions: semi-algebraic systems, equivalent to linear increasing functions and eventual linear ranking functions, were established. Several methods for synthesizing eventual linear ranking functions were compared, and experimental results show that the semi-algebraic system for the synthesis of eventual linear ranking functions established by our method is simpler.
Semantics Computational Model of Formal Languages Based on Monads
MIAO De-cheng, XI Jian-qing and SU Jin-dian
Computer Science. 2017, 44 (1): 199-202.  doi:10.11896/j.issn.1002-137X.2017.01.038
Traditional semantics modelling methods for formal languages have drawbacks in interpreting semantics and describing rules. This paper explored the semantics computation of formal languages by monads, a categorical method. It first constructed the Kleisli category from monads, presented a semantics computational model in the formal framework of the Kleisli category, and then applied it to an example. Compared with traditional semantics modelling methods, the semantics computational model presented in this paper is universal and has stronger abilities of semantic interpretation and rule description.
Research on BPEL Test Sequence Generation for Web Services Composition
ZHANG Ya
Computer Science. 2017, 44 (1): 203-207.  doi:10.11896/j.issn.1002-137X.2017.01.039
A concurrency-based model of control flow was proposed for the analysis and verification of the interactions of Web services compositions, which contain many complex concurrent behaviors expressed in the BPEL language. A formal model for concurrent control flow and an efficient algorithm were also presented to generate BPEL test sequences. First, we discussed the various situations possible in a Web services process by analyzing BPEL files, and translated the BPEL process source code into a concurrent flow diagram to simplify the model. Then, an algorithm for computing the executive paths of the services composition process was discussed; the algorithm can find all executive paths and the nodes passed by those paths. Finally, an example of a composite service was given to prove the usability. The algorithm is the basis of full-scale testing of Web services composition processes and of the research and implementation of Web services composition.
SQL Energy Consumption Forecasting Model Based on Database Load Status
GUO Bing-lei, YU Jiong, LIAO Bin and YANG De-xian
Computer Science. 2017, 44 (1): 208-213.  doi:10.11896/j.issn.1002-137X.2017.01.040
In a typical database server, performance (throughput or response time) is the first-class optimization goal. However, the energy consumption of database systems is ignored by service providers and users, which makes high energy consumption a serious problem in data centers in the pursuit of performance. Building an energy consumption model for query workloads is the first step toward creating a green database. By quantifying the system resources (CPU and disk) consumed by a query workload and splitting the time cost and energy cost into two independent models (a time estimation model and a power estimation model), an energy estimation model with a uniform resource unit was implemented on a single-site database server. The key parameters of the above models were computed using multiple linear regression, and experimental results prove the feasibility of our model. To further prove the accuracy and efficiency of the model, we also ran it under two different system settings (a static system and a dynamic system), making it more suitable for building an energy-aware green database.
Query Expansion Method Based on Ontology and Local Co-occurrence
WANG Xu-yang and WEI Xing-xing
Computer Science. 2017, 44 (1): 214-218.  doi:10.11896/j.issn.1002-137X.2017.01.041
Combining semantic expansion and statistical expansion, a method of query expansion based on ontology and local co-occurrence was proposed. The set of semantic candidate expansion concepts and the set of statistical candidate expansion concepts are obtained by ontology and local co-occurrence respectively, and the final query expansion concepts are obtained by a secondary filter. A method was also presented to calculate the weights of expansion words. The results show that the expanded query better reflects the user's intent. In the design of a semantic retrieval system, the method can effectively improve recall and precision.
Efficiency Optimization of Jaccard’s Similarity Coefficient Based on Two Dimensional Partition
LIAO Bin, ZHANG Tao, YU Jiong, GUO Bing-lei and LIU Ji
Computer Science. 2017, 44 (1): 219-225.  doi:10.11896/j.issn.1002-137X.2017.01.042
With the exponential growth of Internet users and content, the efficiency of the Jaccard similarity coefficient algorithm under big data scenarios is more important than ever before. In order to improve the efficiency of the Jaccard similarity computing process, the implementation of the algorithm under the MapReduce architecture was analyzed. Since Spark is more suitable for iterative and interactive tasks, we migrated the algorithm from the MapReduce platform to Spark based on a two-dimensional partition algorithm, and further improved its efficiency by parameter adjustment, memory optimization and other methods. With two data sets running on three clusters with different numbers of datanodes, the experimental results show that, compared with MapReduce, the execution efficiency of the algorithm under the Spark platform improves by more than 4 times, and energy efficiency improves by more than 3 times.
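For reference, the coefficient being scaled out is simply intersection over union; a minimal sketch with illustrative user-item sets (the paper's contribution is partitioning this computation in two dimensions across a Spark cluster).

    def jaccard(a, b):
        # Jaccard's similarity coefficient: |A ∩ B| / |A ∪ B|.
        return len(a & b) / len(a | b) if (a or b) else 0.0

    u1 = {"news", "sports", "music"}
    u2 = {"sports", "music", "movies"}
    print(jaccard(u1, u2))    # 2 shared of 4 distinct items -> 0.5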
Granularity of Rough Equivalence Class Based Incremental Attribute Core Computation Using Multiple Accelerating Strategies Pruning and Multiple Hashing
ZHAO Jie, ZHANG Kai-hang, DONG Zhen-ning, LIANG Jun-jie and XU Ke-fu
Computer Science. 2017, 44 (1): 226-234.  doi:10.11896/j.issn.1002-137X.2017.01.043
A new incremental core computation algorithm was proposed. Firstly, the rough equivalence class (REC) was proposed based on the smallest computational granularity of global equivalence classes; the characteristics of the REC were analyzed, and core and reduct computation under the REC was studied. Then the relationship between core attributes and RECs was studied, and an equivalent method of judging core attributes and an incremental core computation method based on the 0-REC were designed, through which multiple non-core attributes can be identified in one calculation. On this basis, bilateral pruning strategies were proposed to reduce the calculation field of both attributes and entities, so the algorithm need not traverse all attributes and entities; the pruning strategies still work even when there is no core. Finally, 20 decision sets from UCI, including massive and ultra-high-dimensional ones, were used to verify the strategies and algorithms. The results show that the algorithm is effective and efficient; in most conditions, the algorithm of this paper is superior to current algorithms, and it is especially fit for massive decision tables. The algorithm can serve as the basis of new reduction and optimization algorithms.
Decomposition Strategy for Knowledge Tree of Predicate Based on Static Preconditions
BIAN Rui, WU Xiang-jun and CHEN Ai-xiang
Computer Science. 2017, 44 (1): 235-242.  doi:10.11896/j.issn.1002-137X.2017.01.044
AI planning is essentially a search problem, and strategies are usually used to reduce the search space and improve planning efficiency. In planning methods that take a predicate as the target, the efficiency of planning tree generation directly affects planning efficiency. Therefore, this paper proposed a decomposition strategy for the knowledge tree of a predicate based on static preconditions, and gave the corresponding decomposition algorithm. For any domain, using the algorithm, the knowledge tree of a predicate can be decomposed into a number of smaller knowledge sub-trees. The use of knowledge sub-trees in the planning process can reduce the search space effectively, thus generating the plan tree quickly and improving planning efficiency. At the same time, domain knowledge is extracted from the domain along with them. Finally, the experimental results show that the decomposition algorithm is efficient.
Sparse Controllable Principal Component Analysis Method
TAN Ya-fang, LIU Juan, WANG Cai-hua and JIANG Wan-wei
Computer Science. 2017, 44 (1): 243-246.  doi:10.11896/j.issn.1002-137X.2017.01.045
Principal component analysis (PCA) is a multivariate statistical analysis method which chooses a few important variables (dimension reduction) by linear transformation. PCA is widely used in scientific research and engineering; however, its results can sometimes be difficult to interpret. Therefore, some researchers introduced sparse penalties (lasso, fused lasso, adaptive lasso, etc.) to obtain interpretable results. Since traditional sparse penalties are not easy to control, we presented a novel penalty, namely the sparse controllable penalty (SCP), to control the sparsity of principal components. Compared with traditional penalties, SCP is scale insensitive, dimension insensitive and bounded between 0 and 1, and its hyperparameter is easy to adjust to control sparseness. Experimental results demonstrate that sparse controllable principal component analysis (SCPCA) is efficient.
Similarity Measure Algorithm of Time Series Based on Binary-dividing SAX
ZHANG Jian-hui, WANG Hui-qing, SUN Hong-wei, GUO Zhi-rong and BAI Ying-ying
Computer Science. 2017, 44 (1): 247-252.  doi:10.11896/j.issn.1002-137X.2017.01.046
Time series dimensionality reduction technology is used to handle high-dimensional time series. Symbolic aggregate approximation (SAX) is a time series dimensionality reduction technique which benefits from its brief representation and a high-performance lower-bounding distance; however, the number of segments, a parameter of SAX, must be set manually based on the characteristics of each individual time series. To solve this problem, a similarity measure algorithm of time series based on binary-dividing SAX was presented by introducing sliding windows and statistical methods. The experimental results show that the binary-dividing SAX algorithm not only removes the difficulty of choosing the number of segments, but also reduces the complexity of time series representation and improves classification accuracy over the SAX algorithm on a variety of time series data.
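For orientation, a plain SAX sketch (z-normalization, PAA, Gaussian breakpoints); the number of segments is still a manual parameter here, which is precisely what the binary-dividing variant chooses automatically.

    import numpy as np

    def sax(series, n_segments, breakpoints=(-0.67, 0.0, 0.67)):
        # z-normalize, reduce with piecewise aggregate approximation (PAA),
        # then map each segment mean to a letter; the default breakpoints
        # split the standard normal into a 4-letter alphabet.
        x = np.asarray(series, dtype=float)
        x = (x - x.mean()) / x.std()
        paa = x.reshape(n_segments, -1).mean(axis=1)   # length must divide evenly
        symbols = np.searchsorted(breakpoints, paa)
        return ''.join(chr(ord('a') + s) for s in symbols)

    print(sax([2, 3, 4, 5, 7, 9, 6, 4, 3, 2, 1, 1], n_segments=4))   # 'bdca'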
Hybrid Multi-objective Particle Swarm Optimization Based on Intuitionistic Fuzzy Dominance
MEI Hai-tao, HUA Ji-xue, WANG Yi and WEN Tong
Computer Science. 2017, 44 (1): 253-258.  doi:10.11896/j.issn.1002-137X.2017.01.047
Abstract PDF(955KB) ( 554 )   
References | Related Articles | Metrics
To improve the precision and distribution uniformity of solutions to multi-objective optimization problems, a hybrid multi-objective particle swarm optimization based on intuitionistic fuzzy dominance (IFDHPSO) was proposed. By introducing a scalar factor, this paper utilized intuitionistic fuzzy membership and a ranking method to define a new dominance strategy. Then, a meta-Lamarckian local learning strategy based on simulated annealing was proposed to reduce the probability of being trapped in local optima and of premature convergence. A PSO identity factor was then defined to adjust the inertia weight and acceleration operators adaptively. Furthermore, a declining disturbance strategy was proposed to perturb particle velocity. Finally, simulation results compared with other typical optimization algorithms show that the proposed algorithm performs better in solution precision, uniformity and convergence.
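The abstract does not give the identity-factor formula; as a baseline, this is the canonical PSO velocity update with a linearly decreasing inertia weight standing in for the paper's adaptive adjustment:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, it, max_it, c1=2.0, c2=2.0):
    """One canonical PSO update; the inertia weight decreases linearly from
    0.9 to 0.4 as a stand-in for the adaptive identity-factor adjustment."""
    w = 0.9 - 0.5 * it / max_it
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```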
Constraint-based Activity Clustering in Business Process Model Abstraction
WANG Nan and SUN Shanwu
Computer Science. 2017, 44 (1): 259-263.  doi:10.11896/j.issn.1002-137X.2017.01.048
Abstract PDF(529KB) ( 454 )   
References | Related Articles | Metrics
This paper interpreted activity aggregation in business process model abstraction as a problem of semi-supervised clustering. It chooses appropriate activity sets as initial clusters based on a heuristic method to improve the quality of abstraction. In order to satisfy the order-preserving requirement of the model transformation and the business semantic integrity of the subprocesses, the control flow is further considered when assigning an activity to a cluster (candidate subprocess). A constraint function was designed with two parts: semantic distance and control-flow conflict. The first part computes the semantic distance between activities and subprocesses by introducing virtual documents to represent them. In the second part, according to the four ordering relations of behavioral profiles, a function is designed to measure the control-flow conflict caused by assigning an activity to a cluster. The proposed method was applied to a process model repository. Compared with traditional k-means-based activity clustering, such as methods that randomly generate the initial clusters or rely only on semantic distance, the proposed method more closely approximates the decisions of the involved modelers when clustering activities.
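A sketch of the two-part constraint function, assuming the virtual documents are represented as term vectors and the conflict term is already normalized to [0,1]; the function name and the weighting scheme are illustrative:

```python
import numpy as np

def constraint_cost(activity_vec, cluster_vec, conflict, alpha=0.5):
    """Combine a semantic distance (cosine distance between virtual-document
    vectors) with a control-flow conflict score derived from the four
    behavioral-profile relations; lower cost means a better assignment."""
    cos = activity_vec @ cluster_vec / (
        np.linalg.norm(activity_vec) * np.linalg.norm(cluster_vec) + 1e-12)
    semantic_distance = 1.0 - cos
    return alpha * semantic_distance + (1 - alpha) * conflict
```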
Research of Many-objective Evolutionary Algorithm Based on Alpha Dominance
LIN Meng-man, ZHOU Huan and WANG Li-ping
Computer Science. 2017, 44 (1): 264-270.  doi:10.11896/j.issn.1002-137X.2017.01.049
Abstract PDF(537KB) ( 724 )   
References | Related Articles | Metrics
Classic multi-objective evolutionary algorithms based on Pareto dominance solve problems with 2 or 3 objectives effectively. However, when dealing with many-objective problems, the number of dominance-resistant solutions (DRSs) increases rapidly with the number of objectives, so existing multi-objective algorithms lack selection pressure towards the Pareto front and their optimization performance deteriorates. In this paper, we analyzed the influence of different alpha values, provided a strict Pareto layering, and selected relatively sparse solutions as candidates within the same layer. Finally, we proposed a new many-objective evolutionary algorithm based on alpha partial order and crowding-distance sampling. The performance of the algorithm was evaluated by generational distance (GD), spacing (SP) and hypervolume (HV) on the DTLZ problems. The experimental results show that the convergence of the algorithm improves greatly through eliminating the DRSs. Compared with NSGA-II, MOEA/D and MOEA/D-DU, the overall quality of the solutions produced by the improved algorithm increases greatly.
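For reference, alpha dominance (in the sense of Ikeda et al.) relaxes Pareto dominance by letting each objective trade off against the others with slope alpha, which is what eliminates dominance-resistant solutions. A sketch with a single alpha for all objective pairs (minimization assumed):

```python
import numpy as np

def alpha_dominates(fx, fy, alpha=0.05):
    """x alpha-dominates y if g_i = d_i + alpha * sum_{j != i} d_j <= 0 for
    all objectives i (with d = f(x) - f(y)) and g_i < 0 for at least one."""
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    diff = fx - fy
    g = diff + alpha * (diff.sum() - diff)
    return bool(np.all(g <= 0) and np.any(g < 0))

# A DRS-like point: neither vector Pareto-dominates the other, but alpha
# dominance resolves the comparison.
print(alpha_dominates([0.0, 1.0], [1.0, 0.98]))   # -> True
```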
Application of Nondeterministic Finite Automata in Braille Transcoding
ZHANG Ju-xiao
Computer Science. 2017, 44 (1): 271-276.  doi:10.11896/j.issn.1002-137X.2017.01.050
Abstract PDF(1118KB) ( 724 )   
References | Related Articles | Metrics
It is of great significance to conduct research on computer interaction technology for the blind. However, due to the lack of an international standard, the braille fonts of different companies are incompatible with each other, which causes many problems. If braille can be represented through Chinese-character point-location encoding, it can be freed from the restrictions of braille fonts. This paper presented the transcoding process from braille to Chinese-character point-location encoding using nondeterministic finite automata, and verified it through the Reverse Order-Splitting-Sets method. Tests show that the transcoding accuracy reaches 100%. In this way, braille becomes independent of particular fonts, so that the blind can use computers more conveniently.
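The abstract does not list the automaton's states or transitions; as a generic illustration of how an NFA is simulated by tracking a set of reachable states, with a toy transition table standing in for the braille-transcoding automaton:

```python
# Generic NFA simulation by subset tracking; states and transitions here
# are a toy example, not the braille-transcoding automaton itself.
def nfa_accepts(transitions, start, accept, word):
    """transitions: dict mapping (state, symbol) -> set of next states."""
    current = {start}
    for symbol in word:
        current = set().union(*(transitions.get((q, symbol), set())
                                for q in current)) if current else set()
    return bool(current & accept)

t = {("q0", "a"): {"q0", "q1"}, ("q1", "b"): {"q2"}}
print(nfa_accepts(t, "q0", {"q2"}, "aab"))   # -> True
```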
Research and Application of Social Network Data Acquisition Technology
XU Yan-fei, LIU Yuan and WU Wen-peng
Computer Science. 2017, 44 (1): 277-282.  doi:10.11896/j.issn.1002-137X.2017.01.051
Abstract PDF(528KB) ( 796 )   
References | Related Articles | Metrics
With the rapid development of social networks, research on them is also gradually deepening, and the acquisition of basic social network data is clearly very important to such research. Aiming at existing data acquisition schemes, and according to the Sina authorization standard and the latest microblog encryption, this paper studied two kinds of acquisition schemes: one obtains data through the API interface after OAuth2.0 authentication, and the other crawls data through a Web crawler after simulating login with RSA2 encryption. It also studied data acquisition using appropriate extraction rules for microblog pages. All three acquisition schemes can collect data effectively, and each has its own characteristics. According to the requirements of data acquisition, a fusion of the different acquisition schemes was proposed in this paper. Experiments show that the fusion strategy can obtain a vast amount of data quickly and efficiently.
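A minimal sketch of the API-based scheme: call a REST endpoint with an OAuth2.0 access token. The URL and parameter names below are placeholders, not the actual Sina Weibo API:

```python
import requests

def fetch_statuses(access_token, url="https://api.example.com/statuses"):
    """Fetch one page of statuses using an OAuth2.0 access token; endpoint
    and parameter names are hypothetical placeholders."""
    resp = requests.get(url,
                        params={"access_token": access_token, "count": 100},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```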
Description of Data Association Occurring in Related Sets and Specific Example
YAN Lin, RUAN Ning, YAN Shuo and GAO Wei
Computer Science. 2017, 44 (1): 283-288.  doi:10.11896/j.issn.1002-137X.2017.01.052
Abstract PDF(641KB) ( 510 )   
References | Related Articles | Metrics
In order to discuss the problem of data association, a data set was divided into granules according to different levels, and each granule was assigned a related set. This gives rise to a hierarchical structure called a granulation tree, which is associated with classes of related sets. Then, supported by two granulation trees built on the same data set, a definition of data association was introduced, demonstrating a numerical representation of the data association occurring in related sets and leading to a way of describing it numerically. The research also yields a condition that is equivalent to the definition. Using this condition and a specific example, some properties were investigated, focusing on numerical analyses of issues such as the closeness degree of the data association, the granulation identity between data, and the numerical comparison of data associations. Accordingly, the discussion of the specific example provides a basis for algorithm programming and shows the practical significance of the research.
High Resolution Remote Sensing Image Object Recognition Algorithm Based on SIFT and Non-parametric Bayes
WANG Jian, BAI He-xiang and LI De-yu
Computer Science. 2017, 44 (1): 289-294.  doi:10.11896/j.issn.1002-137X.2017.01.053
Abstract PDF(1232KB) ( 576 )   
References | Related Articles | Metrics
Object recognition has always been a significant problem in the field of remote sensing image processing. With the development of remote sensing technology, high-resolution remote sensing images carry plenty of scale-invariant features which are highly correlated with each other, and traditional recognition methods struggle to adapt to this development. Based on the SIFT (Scale-Invariant Feature Transform) algorithm, a fast and accurate algorithm for ground object recognition was proposed, namely DBSIFT (Double Backward SIFT). This method constructs a new pyramid based on SIFT, uses a DP (Dirichlet Process) to identify similar features, and then segments them. Weighing the relationship between geometry and arithmetic, nine indexes were selected to evaluate the accuracy of segmentation. In the experiments, similar ground objects are identified accurately and the segmentation performs well, which further demonstrates the effectiveness of the method.
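For orientation, this is baseline SIFT feature extraction with OpenCV (the standard algorithm that DBSIFT builds its new pyramid on, not DBSIFT itself); the image filename is a placeholder:

```python
import cv2

# Baseline SIFT extraction (requires OpenCV >= 4.4); "scene.tif" is a
# placeholder for a high-resolution remote sensing image.
img = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)   # N keypoints, (N, 128) descriptors
```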
Multi-focus Image Fusion Algorithm Based on Compressed Sensing and Regional Characteristics
CAO Yi-qin, HE Ya-fei and HUANG Xiao-sheng
Computer Science. 2017, 44 (1): 295-299.  doi:10.11896/j.issn.1002-137X.2017.01.054
Abstract PDF(1137KB) ( 419 )   
References | Related Articles | Metrics
Traditional image fusion based on compressed sensing treats all coefficients as sparse, but the low-frequency coefficients of a wavelet decomposition are not sparse, which degrades the quality of the compressed reconstruction. Besides, traditional fusion rules struggle to extract comprehensive characteristic values from the high-frequency coefficients. To solve this problem, we applied different fusion rules to the high- and low-frequency coefficients obtained by wavelet decomposition, and proposed an improved fusion method in which compressed sensing with regional characteristics is applied to the high-frequency part. The low-frequency coefficients are fused with a rule combining region-variance weighting and maximum absolute value. Firstly, the high-frequency coefficients are compressively sampled with a random measurement matrix, which has a good restricted isometry property. The observed values are then fused additively or by weighting according to their energy matching degree, so as to fuse the characteristic information of the high-frequency sub-bands in different directions. Then the orthogonal matching pursuit recovery algorithm is used to reconstruct the signal of the high-frequency part. Finally, the inverse wavelet transform combines the low-frequency and high-frequency information to reconstruct the fused image. Experimental results show that, compared with previous fusion methods based on compressed sensing, the fused image is clearer, and the new algorithm is favorable for image signal reconstruction in both subjective and objective evaluation indexes, with good usability.
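A sketch of the first stage under stated assumptions: one-level wavelet decomposition with PyWavelets, then compressed sampling of one directional high-frequency sub-band with a random Gaussian measurement matrix (such matrices satisfy the restricted isometry property with high probability). The compression ratio is illustrative:

```python
import numpy as np
import pywt

img = np.random.rand(128, 128)              # stand-in for a source image
cA, (cH, cV, cD) = pywt.dwt2(img, "db1")    # low- and high-frequency sub-bands
x = cH.flatten()                            # one directional high-freq sub-band
m = len(x) // 4                             # 4x compression
Phi = np.random.randn(m, len(x)) / np.sqrt(m)
y = Phi @ x                                 # compressed measurements
print(cA.shape, y.shape)
```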
Algorithm of Accelerated Cracks Detection Based on Improved Percolation Model in Concrete Surface Image
QU Zhong, GUO Yang and JU Fang-rong
Computer Science. 2017, 44 (1): 300-302.  doi:10.11896/j.issn.1002-137X.2017.01.055
Abstract PDF(889KB) ( 753 )   
References | Related Articles | Metrics
Due to concrete surface roughness, uneven illumination, shadows, complex backgrounds and other disruptive factors, traditional concrete crack detection methods based on image processing cannot accurately detect concrete cracks, especially unclear and tiny cracks. A crack detection method based on a percolation model, which fully considers the low brightness and slenderness of cracks, can accurately detect unclear and tiny cracks, but it is time-consuming. In order to solve these problems, an improved crack detection algorithm based on the percolation model was proposed in this article, which reduces processing time by reducing the number of percolated pixels. Experimental results show that the proposed algorithm significantly accelerates crack detection while maintaining high detection precision.
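The core of percolation-based detection is region growing from a dark seed pixel; a minimal sketch, where the pixel cap illustrates the kind of cut that reduces the number of percolated pixels (the thresholds are illustrative, not the paper's values):

```python
import numpy as np
from collections import deque

def percolate(gray, seed, max_pixels=500, delta=10):
    """Grow a region from a dark seed, admitting 8-neighbors whose brightness
    stays within delta of the seed; returns the percolated region mask."""
    h, w = gray.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue, thresh = deque([seed]), int(gray[seed]) + delta
    count = 1
    while queue and count < max_pixels:      # cap percolated pixels for speed
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                        and gray[rr, cc] <= thresh:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
                    count += 1
    return mask
```

A slender, elongated mask suggests a crack; a compact blob suggests background texture, which is the shape cue percolation methods exploit.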
Improved HOG Face Feature Extraction Algorithm Based on Haar Characteristics
JIANG Zheng and CHENG Chun-ling
Computer Science. 2017, 44 (1): 303-307.  doi:10.11896/j.issn.1002-137X.2017.01.056
Abstract PDF(428KB) ( 634 )   
References | Related Articles | Metrics
Most existing feature extraction algorithms are prone to be influenced by external factors such as illumination, which can lower the face recognition rate. The robustness of the histogram of oriented gradients (HOG) can mitigate the effect of illumination on the recognition rate. However, when calculating the gradient direction and magnitude of a pixel, the traditional HOG algorithm considers only the four pixels situated in the horizontal and vertical directions, so the gradient direction and magnitude may become unstable when the external environment changes. Thus, we proposed an improved HOG face feature extraction algorithm based on Haar characteristics. When calculating the gradient direction and magnitude, we considered the influence of 8 neighboring pixels. Meanwhile, because Haar-like features are simple and fast to compute, we incorporated them into HOG and presented four groups of Haar feature encoding models that calculate the texture features of the face according to the improved HOG. In our experiments we used the FERET and Yale B datasets. Experiments demonstrate that, compared with existing algorithms, the proposed method is more robust and improves the recognition rate under varying illumination conditions. On the fb, fc, dup1 and dup2 datasets, the recognition rates of the proposed method are 95.1%, 80.9%, 70.1% and 63.2% respectively; on the Yale B dataset, its rate is 89.1%.
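As an illustration of the 8-neighbor idea (not the paper's exact Haar encoding models): traditional HOG uses [-1, 0, 1] kernels that touch only horizontal and vertical neighbors, whereas Sobel kernels already weigh all 8 neighbors:

```python
import numpy as np
from scipy.ndimage import convolve

def gradient_8(gray):
    """Gradient magnitude and unsigned direction from Sobel kernels, which
    draw on all 8 neighbors of each pixel (cf. HOG's [-1, 0, 1] kernels)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = convolve(gray, kx)
    gy = convolve(gray, kx.T)
    magnitude = np.hypot(gx, gy)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned, as in HOG
    return magnitude, direction
```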
Ship Trajectory Tracking Based on Binocular Vision
HUANG Ye, HUANG Jing, XIAO Chang-shi, JIANG Wen and SUN Yi
Computer Science. 2017, 44 (1): 308-313.  doi:10.11896/j.issn.1002-137X.2017.01.057
Abstract PDF(1240KB) ( 806 )   
References | Related Articles | Metrics
A binocular vision model can measure the distance to a target by simulating the human eyes. In order to obtain the real-time motion state of a sea vessel, this paper proposed a ship trajectory tracking method based on binocular vision. Firstly, camera calibration and 3D reconstruction of spatial points on a line are used to measure the distance between the camera and the ship, with the camera as the center, and to compute part of the ship's trajectory. Secondly, a constant velocity (CV) model is adopted to establish the ship motion model. Finally, a strong tracking Kalman filter (STKF) is applied to the established motion model to track the ship trajectory and estimate the motion state of the target ship in real time. Experiments show that the proposed binocular-vision-based method tracks the ship trajectory effectively and is suitable for engineering applications.
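A sketch of one predict/update step of a plain constant-velocity Kalman filter in 2D; the strong tracking variant additionally inflates the prediction covariance with a fading factor, which is omitted here, and the noise parameters are illustrative:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=0.01, r=1.0):
    """One step of a CV-model Kalman filter; state [px, py, vx, vy],
    measurement z = observed [px, py]."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```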
Hippocampus Segmentation Based on Sparse Coding and Orientation-Scale Descriptor
LIU Ying, ZHANG Ming-hui, YANG Wei, LU Zhen-tai, FENG Qian-jin and SU Yu-sheng
Computer Science. 2017, 44 (1): 314-320.  doi:10.11896/j.issn.1002-137X.2017.01.058
Abstract PDF(3268KB) ( 482 )   
References | Related Articles | Metrics
As the hippocampus is associated with neurodegenerative diseases, many methods have been proposed to segment it, but its irregular shape and vague boundaries make highly precise segmentation difficult. We proposed a novel algorithm called SCOSD to increase the accuracy of hippocampus segmentation. Motivated by the many existing dictionary-based methods, SCOSD uses an orientation-scale descriptor (OSD) to describe each pixel. The OSD contains not only intensity information, such as texture and gradient information, but also geometrical information; its advantage is that it reduces the inhomogeneity among different images while consisting of several low-dimensional features. The SCOSD method has four steps. Firstly, the orientation-scale descriptors are extracted and a dictionary is constructed for each target voxel. Secondly, the corresponding dictionary is used to approximately represent the orientation-scale descriptor of the target voxel, and the sparse coefficients are obtained simultaneously. Thirdly, the labels and coefficients of the dictionary are fused to estimate a label value for each voxel. Finally, the fused value is thresholded to obtain the final label. Experiments on the competition data of Medical Image Computing and Computer Assisted Intervention (MICCAI) demonstrate that SCOSD has higher segmentation precision than algorithms such as SIMPLE, Majority Voting, STAPLE and COLLATE.
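A sketch of the sparse-coding and label-fusion steps under stated assumptions: code the target voxel's descriptor over its dictionary with orthogonal matching pursuit, then fuse the atoms' labels with the sparse coefficients and threshold. The dimensions, labels and data below are random stand-ins, not the OSD pipeline:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

D = np.random.rand(32, 200)            # dictionary: 200 atom descriptors
labels = np.random.randint(0, 2, 200)  # each atom's hippocampus label (0/1)
d = np.random.rand(32)                 # descriptor of the target voxel
coefs = orthogonal_mp(D, d, n_nonzero_coefs=10)   # sparse coding step
fused = labels @ np.abs(coefs) / (np.abs(coefs).sum() + 1e-12)  # label fusion
print(int(fused > 0.5))                # threshold to obtain the final label
```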