Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42, Issue 8, 2015
Review of Human Exoskeleton Suit Technology
ZHANG Xiang-gang, QIN Kai-yu and SHI Yu-liang
Computer Science. 2015, 42 (8): 1-6. 
The human exoskeleton suit is an electro-mechanical system worn on the body to augment the wearer's strength and endurance and to extend human capabilities. It is a complex integration of mechanics, computer technology, control technology, micro-drive technology, materials technology and other disciplines, and can be widely used in healthcare, medical care, rehabilitation of stroke patients, outdoor sports, emergency response, disaster relief, and other fields. Many significant scientific achievements and applications have been derived from research on it. This paper provides a review of the history and the latest developments of human exoskeleton suits, introduces their architecture and working principle, and ends with a discussion of the technological challenges and future research directions that are critical to the field of exoskeletons.
Survey of Automatic Terminology Extraction Methodologies
YUAN Jin-song, ZHANG Xiao-ming and LI Zhou-jun
Computer Science. 2015, 42 (8): 7-12. 
Terminology extraction is fundamental research work for the text processing domain. The quality of an ontology and the accuracy of semantic retrieval can be improved by a better automatic terminology extraction method. Firstly, the definition and characteristics of terminology, as well as the evaluation of terminology extraction, are briefly introduced. Secondly, through a thorough analysis and summarization of the literature on automatic terminology extraction over the past twenty years, a comprehensive survey of state-of-the-art automatic terminology extraction methodologies is conducted, covering domestic and international research, the advantages and disadvantages of each approach, and detailed descriptions of some classical algorithms. Finally, trends for future study are discussed.
Green Computing and Green Embedded Systems
GUO Rong-zuo, GUO Jin and LI Ming
Computer Science. 2015, 42 (8): 13-21. 
Green computing is an advanced computing technology that aims to use advanced ideas, techniques and methods to reduce the energy consumption of computing systems, thereby reducing their impact on people and the environment. Embedded systems now account for the vast majority of computing systems, so they too need green computing technologies that reduce energy consumption without affecting performance. First, the state of green computing research is reviewed. Then the definition and connotation of green embedded systems are discussed. Finally, the green appraisal of green embedded systems is discussed and the research agenda for green embedded systems is explored. The main contribution of the article is applying green computing ideas to put forward the concept of green embedded systems, study the related problems, and point out the contents and directions of green embedded systems research.
Advances in Tag Ranking for Internet Social Images
WU Yan-zhang, LIU Hong-zhe, FENG Song-he, YUAN Jia-zheng and ZHANG Jing-yi
Computer Science. 2015, 42 (8): 22-27. 
Tag ranking for Internet social images is one of the most popular topics in computer vision and machine learning. The effectiveness of image retrieval and other applications is directly affected by how reasonably image tags are ordered. Existing tag ranking methods are varied; this paper divides them into relevance-based and saliency-based tag ranking. Two typical image tag ranking approaches are highlighted and their respective advantages and disadvantages are analyzed. Finally, evaluation methods and trends in image tag ranking are briefly discussed.
Programming Factors Affecting the Efficiency of Parallel Programs in Multi-core Systems
WANG Wen-yi and RAN Xiao-long
Computer Science. 2015, 42 (8): 28-31. 
This paper analyzes the effects of factors such as memory alignment and cache utilization on parallel program performance in multi-core architectures. The shared-memory environment OpenMP is used to analyze the relationship between the amount of parallel computation and the number of processor cores. Experiments implementing the row-partitioned and Cannon matrix multiplication algorithms in MPI show that only by comprehensively considering the architectural features of the multi-core system, the system software, the multi-core programming environment and the correct use of algorithms can one design a better parallel application, one that achieves high efficiency with low energy consumption.
Improved Estimation of Distribution Algorithms Based on Normal Distribution
QIU Ling, GAO Shang and CAO Cun-gen
Computer Science. 2015, 42 (8): 32-35. 
An improved estimation of distribution algorithm based on the normal distribution was presented for function optimization in continuous spaces. The algorithm fits a normal distribution to the selected individuals, generates a new population by sampling from that distribution, and crosses some selected individuals with the best solution. Experimental results show that the improved algorithm is more effective than estimation of distribution algorithms based on the uniform distribution and on the plain normal distribution. Finally, the influence of the selection proportion of better individuals is analyzed.
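The paper gives no pseudocode; the following minimal sketch (function and parameter names are our own, and the crossover form is an assumption) illustrates the loop the abstract describes: fit a normal distribution to the selected individuals, resample a new population, and cross part of the sample with the incumbent best solution.

```python
import numpy as np

def normal_eda(f, dim, pop_size=50, n_select=20, cross_rate=0.3,
               generations=100, bounds=(-10.0, 10.0), rng=None):
    """Minimize f with a normal-distribution EDA plus crossover with the best."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best = min(pop, key=f).copy()
    for _ in range(generations):
        pop = pop[np.argsort([f(x) for x in pop])]          # rank by fitness
        if f(pop[0]) < f(best):
            best = pop[0].copy()
        elite = pop[:n_select]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
        pop = rng.normal(mu, sigma, size=(pop_size, dim))   # sample new population
        # cross a fraction of the sampled individuals with the best solution
        mask = rng.random(pop_size) < cross_rate
        alpha = rng.random((int(mask.sum()), dim))
        pop[mask] = alpha * pop[mask] + (1 - alpha) * best
        pop = np.clip(pop, lo, hi)
    return best

print(normal_eda(lambda x: float(np.sum(x**2)), dim=5))
```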
Approach to Monotonicity Attribute Reduction in Quantitative Rough Set
JU Heng-rong, YANG Xi-bei, QI Yong and YANG Jing-yu
Computer Science. 2015, 42 (8): 36-39. 
It is well known that monotonicity plays an important role in attribute reduction in classical rough sets. However, this property does not always hold in generalized models, of which the quantitative rough set is a typical example. From this point of view, a definition of lower-approximation monotonicity attribute reduction was presented for the quantitative rough set model, and a heuristic approach for computing the reduct was given. Experimental results show that, compared with the lower-approximation preservation reduct, lower-approximation monotonicity not only saves time but also increases the certainty expressed by the positive and negative regions and decreases the uncertainty coming from the boundary region.
Variational Level Set Method for Image Segmentation Based on Improved Signed Pressure Force Function
CAO Jun-feng, WU Xiao-jun and CHEN Su-gen
Computer Science. 2015, 42 (8): 40-43. 
To handle inaccurate contour evolution, which causes wrong segmentation of images with weak boundaries and intensity inhomogeneity, a variational level set method for image segmentation based on an improved signed pressure force function combined with the statistical information of the image was proposed. Firstly, a new active contour model is constructed by replacing the edge function with a new signed pressure force function. Secondly, the algorithm retains the merits of the geodesic active contour (GAC) and Chan-Vese (C-V) models and makes the level set function stop evolving at the boundary of the target. Finally, simulation experiments were conducted on images with poor boundaries and intensity inhomogeneity. Experimental results show that the proposed model has high computational efficiency and accuracy, and is robust to noise.
MapReduce Based Feature Selection Parallelization
LU Jiang and LI Yun
Computer Science. 2015, 42 (8): 44-47. 
Feature selection has become a necessary preprocessing step for high-dimensional data. With the explosive growth of data sizes, traditional feature selection algorithms cannot meet the current requirements of processing large-scale, high-dimensional data. Resorting to Google's MapReduce programming model, we designed a distributed local-learning-based feature selection algorithm, D-logsf. Experiments conducted on several real and synthetic data sets show that D-logsf is correct and reliable; compared with the traditional feature selection algorithm Logsf, it obtains an approximately linear speedup and can effectively handle large-scale data sets.
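As an illustration of the map/reduce split only (D-logsf's actual local-learning weights are not reproduced here; the correlation score below is a stand-in), the following sketch scores features per data partition in parallel (map) and averages the weights (reduce):

```python
import numpy as np
from multiprocessing import Pool

def map_score(args):
    """Map step: score each feature on one data partition.
    |Pearson correlation| with the label stands in for Logsf's local weights."""
    X, y = args
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    return np.abs(num / den)

def dlogsf_like(X, y, n_parts=4, top_k=10):
    parts = list(zip(np.array_split(X, n_parts), np.array_split(y, n_parts)))
    with Pool(n_parts) as pool:
        scores = pool.map(map_score, parts)     # map over partitions
    weights = np.mean(scores, axis=0)           # reduce: average the weights
    return np.argsort(weights)[::-1][:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))
    y = X[:, 3] - 2 * X[:, 7] + rng.normal(scale=0.1, size=1000)
    print(dlogsf_like(X, y))                    # features 3 and 7 rank first
```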
Improved LSRC and its Application in Face Recognition
YIN He-feng, WU Xiao-jun and CHEN Su-gen
Computer Science. 2015, 42 (8): 48-51. 
Recently, sparse representation based classification (SRC) has attracted much attention in face recognition. SRC forms the dictionary directly from all the training samples, so when there are many training samples the subsequent sparse solver can be very slow. To alleviate this problem, a new local SRC based on the similarity between the sparse coefficients of training and test samples was presented. According to this similarity, a certain number of training samples are selected to form the over-complete dictionary, and the test sample is then decomposed over this dictionary. In contrast to the original LSRC, which chooses the neighbors of a test sample by kNN, the proposed approach steadily achieves better performance. Experimental results on the ORL, Yale and AR databases indicate that the proposed method is superior to both SRC and LSRC.
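A minimal sketch of our reading of the selection rule, assuming scikit-learn's Lasso as the sparse coder (the paper's solver and similarity measure may differ): code the test sample over all training columns, keep the samples with the largest coefficients as the local dictionary, re-code, and classify by class-wise residual.

```python
import numpy as np
from sklearn.linear_model import Lasso

def improved_lsrc(train, labels, test, n_neighbors=50, alpha=1e-3):
    """train: (n_pixels, n_train) matrix of training faces as columns;
    labels: (n_train,) array of class ids; test: (n_pixels,) vector."""
    train = train / (np.linalg.norm(train, axis=0, keepdims=True) + 1e-12)
    coder = Lasso(alpha=alpha, max_iter=5000, fit_intercept=False)
    coder.fit(train, test)                                  # global sparse code
    idx = np.argsort(np.abs(coder.coef_))[::-1][:n_neighbors]
    local, local_labels = train[:, idx], labels[idx]        # local dictionary
    coder.fit(local, test)                                  # re-code locally
    best, best_res = None, np.inf
    for c in np.unique(local_labels):
        coef_c = np.where(local_labels == c, coder.coef_, 0.0)
        res = np.linalg.norm(test - local @ coef_c)         # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(0)
train = rng.normal(size=(256, 120))           # 120 synthetic "faces" as columns
labels = np.repeat(np.arange(12), 10)         # 12 subjects, 10 images each
print(improved_lsrc(train, labels, train[:, 5]))   # should recover class 0
```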
Unsupervised Image Segmentation Based on Saliency Detection
ZHOU Jing-bo, REN Yong-feng and YAN Yun-yang
Computer Science. 2015, 42 (8): 52-55. 
Interactive image segmentation needs user interaction, which increases time complexity and the user's burden. We propose an unsupervised image segmentation algorithm based on visual saliency. First, the mean shift (MS) algorithm is used to obtain an initial non-overlapping segmentation. The regions generated by MS are represented by a region adjacency graph (RAG) in which an edge exists only between adjacent regions. Second, the color dissimilarity and texture consistency between adjacent regions are computed as the edge weights of the RAG. The algorithm then defines a saliency index (SI) from the color and spatial information of each region generated by MS. The region with the maximal SI is taken as the object seed, and the boundary region with the minimal SI as the background seed. Finally, regions are merged around the object and background seeds according to a maximal-similarity strategy. The results show that the proposed algorithm obtains better segmentation without any interaction and avoids over-segmentation compared with other unsupervised methods.
PCA Face Recognition Algorithm Based on Local Feature
PANG Cheng, GUO Zhi-bo and DONG Jian
Computer Science. 2015, 42 (8): 56-59. 
Principal component analysis (PCA) is an important feature extraction method in pattern recognition; it extracts the main features of samples via the KL expansion. We propose a PCA extension for face recognition called modular sorting PCA (MSPCA). MSPCA first divides each image matrix into blocks. Eigenvectors corresponding to all eigenvalues are obtained from the sub-image matrices of all blocks using PCA, and the eigenvectors corresponding to the k largest eigenvalues are selected to extract the features of the sub-images. Finally, the feature matrices extracted from the sub-images are merged, and the combined feature matrix is treated as a new sample on which PCA+LDA is performed. Compared with PCA and PCA+LDA, MSPCA works on sub-image matrices and thereby avoids singular value decomposition theory, which makes it simpler. Experimental results on the ORL face database show that the proposed method outperforms the classical PCA and PCA+LDA methods.
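A sketch of the block-wise PCA step under our own grid and dimension choices (the final PCA+LDA stage and the paper's sorting of eigenvalues across blocks are omitted):

```python
import numpy as np

def block_pca_features(images, grid=(2, 2), k=10):
    """Modular PCA sketch: split each image into blocks, run PCA per block
    position, keep the k leading eigenvectors, and concatenate the features."""
    n, h, w = images.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = images[:, i*bh:(i+1)*bh, j*bw:(j+1)*bw].reshape(n, -1)
            block = block - block.mean(axis=0)
            cov = block.T @ block / n              # covariance of this block
            vals, vecs = np.linalg.eigh(cov)       # KL expansion basis
            basis = vecs[:, np.argsort(vals)[::-1][:k]]
            feats.append(block @ basis)            # project block onto basis
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
print(block_pca_features(rng.normal(size=(20, 28, 28))).shape)  # (20, 40)
```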
Detection of Multi-label Data Streams Change Based on Probability of Relevance
SHI Zhong-wei and WEN Yi-min
Computer Science. 2015, 42 (8): 60-64. 
Traditional concept drift detection approaches mainly focus on single-label scenarios, and not enough attention has been paid to mining multi-label data streams, even though such streams are common in the real world. This makes it necessary to design efficient concept drift detection algorithms for multi-label data streams. After analyzing label dependence, the property unique to multi-label data streams, this paper proposes a concept drift detection algorithm based on label relevance probabilities. The basic idea originates from the cause of concept drift: the algorithm describes the distribution of the stream by the probability of relevance of each label, and then decides whether concept drift has occurred by monitoring the change in distribution between old and new data. Experimental results show that the proposed algorithm can rapidly and accurately detect concept drift and achieves promising predictive performance for multi-label stream classification.
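A sketch of the monitoring idea, assuming a simple sliding-window comparison and an L1 distance (the paper's exact statistic and threshold rule may differ):

```python
import numpy as np

def label_relevance(window):
    """Per-label relevance probability: fraction of instances carrying the label.
    window is a (n_instances, n_labels) binary matrix."""
    return window.mean(axis=0)

def drift_detected(old_window, new_window, threshold=0.15):
    """Flag concept drift when the label-relevance distributions of the old
    and new windows diverge by more than the threshold."""
    d = np.abs(label_relevance(old_window) - label_relevance(new_window)).sum()
    return d > threshold

rng = np.random.default_rng(0)
old = (rng.random((500, 5)) < [0.1, 0.6, 0.3, 0.2, 0.5]).astype(int)
new = (rng.random((500, 5)) < [0.5, 0.2, 0.3, 0.2, 0.5]).astype(int)
print(drift_detected(old, new))   # True: labels 0 and 1 changed prevalence
```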
Uneven Cluster Routing Algorithm Based on Node Location and Node Density
YAN Ran, YANG Yun, SHI Ting-jun, KONG Xiu-ping, XU Wen-chun and YANG An-ju
Computer Science. 2015, 42 (8): 65-69. 
By analyzing existing cluster routing algorithms, we propose an uneven cluster routing algorithm based on node location and node density. In the cluster head election stage, the residual energy of nodes is considered and a competition mechanism is introduced to select cluster heads. In the clustering stage, the distance from each node to the base station and the node density are used to form unequal clusters, which balances node energy consumption and mitigates the hot spot problem in routing. In the inter-cluster routing stage, communication cluster heads are chosen to take over inter-cluster data forwarding: cluster heads only collect and aggregate data within the cluster, while the communication cluster heads transmit data between clusters, reducing the energy consumption of the cluster heads. Experimental results show that the improved algorithm effectively balances the network load and significantly prolongs the network lifetime.
Personalized Tag Recommendation Algorithm Mixing Language Model and Topic
LI Hui, MA Xiao-ping, HU Yun and SHI Jun
Computer Science. 2015, 42 (8): 70-74. 
More and more content on the Web is generated by users. Tagging systems have gained tremendous popularity as a way to organize this information and make it accessible via current search technology. We introduce an approach to personalized tag recommendation that combines a probabilistic model of tags from the resource with tags from the user. Within this model, we investigate simple language models as well as Latent Dirichlet Allocation. Extensive experiments on a real-world dataset crawled from a large tagging system show that personalization improves tag recommendation, and our approach significantly outperforms traditional approaches.
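The simplest instance of such a combination, sketched under our own naming and with unigram "language models" for both components (the paper also substitutes LDA for either side), is a linear mixture of the resource's and the user's tag distributions:

```python
from collections import Counter

def recommend_tags(resource_tags, user_tags, lam=0.5, top_n=5):
    """Score tags by lam * P(tag | resource) + (1 - lam) * P(tag | user)."""
    def unigram(tags):
        counts = Counter(tags)
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}
    p_res, p_usr = unigram(resource_tags), unigram(user_tags)
    vocab = set(p_res) | set(p_usr)
    scores = {t: lam * p_res.get(t, 0.0) + (1 - lam) * p_usr.get(t, 0.0)
              for t in vocab}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend_tags(["python", "web", "api"], ["python", "ml", "python"]))
```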
Research on Sequence-based Predictor for GPCR-Drug Interaction Prediction
DING Lin-song and ZHENG Yu-jie
Computer Science. 2015, 42 (8): 75-77. 
Accurately identifying whether a G-protein-coupled receptor (GPCR) will interact with a drug is a crucial step in drug discovery. However, experimentally determining GPCR-drug interactions is time-consuming and expensive, so automated methods that predict GPCR-drug interactions solely from sequence are urgently needed. In this study, a new sequence-based method for GPCR-drug interaction prediction was proposed. Evolutionary information features from the protein sequence and fingerprint features from the drug were combined to form a discriminative feature, and the optimized evidence-theoretic K-nearest neighbor (OET-KNN) algorithm was used as the classifier. Experimental results demonstrate that the proposed method achieves good performance and can act as a complementary predictor to existing methods.
Modular MMC and its Application in Face Recognition
LIU Hui, WAN Ming-hua and WANG Qiao-li
Computer Science. 2015, 42 (8): 78-81. 
The maximum margin criterion (MMC) algorithm extracts only global features; local features cannot be extracted effectively. Therefore, an improved version of MMC named modular maximum margin criterion (MMMC) was proposed. First, the original images are divided into modular images, also called sub-images. Then MMC is applied directly to extract features from these sub-images, and the sub-image features are combined into global features. Finally, recognition results are obtained with a nearest neighbor (NN) classifier. Tests on the ORL, Yale and AR face databases show that the proposed algorithm has better recognition performance than MMC.
Improved Gene Read Mapping Algorithm Based on MapReduce
TU Jin-jin, YANG Ming and GUO Li-na
Computer Science. 2015, 42 (8): 82-85. 
Parallel read mapping algorithms have become a hotspot in recent years, since high-throughput sequencing technology generates massive numbers of reads. We study genetic matching algorithms and propose an improved gene read mapping algorithm that reduces complexity by using Hadoop's distributed cache mechanism and integrating biological information. Experimental results on Arabidopsis gene data sets show that the improved algorithm effectively improves efficiency and reduces running time.
Moving Target Detection Using Fusion of Visual and Thermal Video
ZHANG Sheng, YAN Yun-yang and LI Yu-feng
Computer Science. 2015, 42 (8): 86-89. 
In outdoor environments, visible-light cameras capture rich texture and spectral information but are greatly affected by illumination changes. Thermal infrared cameras, by contrast, are insensitive to light and still work effectively at night, but thermal images carry less color information and lower contrast. To make full use of the complementary information of infrared and visible light for target detection, a novel method based on a Gaussian mixture model over RGBT data was proposed for more accurate and robust moving target detection. The method adds the thermal infrared image as a fourth component to the conventional Gaussian mixture model to improve the detection rate. Meanwhile, a shadow removal algorithm is introduced to reduce the impact of shadows caused by ambient illumination changes, enhancing robustness. Experimental results show that the method not only achieves higher detection accuracy and more complete objects, but also meets real-time requirements better than conventional Gaussian mixture models.
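To show what "thermal as a fourth component" means in a background model, here is a sketch with a single per-pixel Gaussian standing in for the paper's mixture (shadow removal and the full GMM machinery are omitted; all names are our own):

```python
import numpy as np

def rgbt_foreground(rgb, thermal, mean, var, lr=0.05, k=2.5):
    """One update step of a per-pixel Gaussian background model over a
    4-channel RGBT stack."""
    frame = np.dstack([rgb, thermal[..., None]]).astype(np.float32)  # H x W x 4
    dist2 = ((frame - mean) ** 2 / (var + 1e-6)).sum(axis=2)
    fg = dist2 > k**2 * 4            # Mahalanobis-style test over 4 channels
    upd = (~fg)[..., None]           # update stats only where background matched
    mean = np.where(upd, (1 - lr) * mean + lr * frame, mean)
    var = np.where(upd, (1 - lr) * var + lr * (frame - mean) ** 2, var)
    return fg, mean, var

h, w = 120, 160
mean = np.zeros((h, w, 4), np.float32)
var = np.ones((h, w, 4), np.float32)
rng = np.random.default_rng(0)
fg, mean, var = rgbt_foreground(rng.random((h, w, 3)), rng.random((h, w)),
                                mean, var)
print(fg.shape)
```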
Energy-efficient Multi-hop Routing Protocol for Wireless Sensor Networks
CHEN Zhan-sheng and SHEN Hong
Computer Science. 2015, 42 (8): 90-94. 
To address the node energy consumption imbalance caused by clustering-based multi-hop routing protocols in wireless sensor networks, a novel clustering-based energy-efficient multi-hop routing protocol (EEMR) was proposed. Initially, the network is divided into clusters based on node proximity. An adaptive cluster-head round-robin scheme then optimizes energy consumption within each cluster, and a fitness routing algorithm based on high residual energy, short paths and radial angle balances traffic load and energy consumption among cluster heads, addressing the energy imbalance that arises in multi-hop routing protocols. Simulations show that EEMR effectively balances energy consumption among nodes, significantly improving network lifetime and utilization.
QoS-aware Resource Block Allocation and MCS Selection for LTE-A Femtocell Downlink
LI Long-fei, CHEN Xin and XIANG Xu-dong
Computer Science. 2015, 42 (8): 95-100. 
We address the problem of joint resource block (RB) allocation and modulation-and-coding scheme (MCS) selection for the long term evolution-advanced (LTE-A) femtocell downlink. We first formulate the problem as an integer linear program (ILP) whose objective is to maximize the total throughput of a closed femtocell while guaranteeing a minimum throughput, one of the most important quality of service (QoS) metrics, for each user. In view of the NP-hardness of the ILP, we then propose an intelligent optimization algorithm called ACOGA with reduced polynomial time complexity. ACOGA applies a genetic algorithm (GA) to optimize the parametric configuration of the conventional ant colony optimization (ACO) algorithm, thereby speeding up convergence and improving solution quality. Simulation results show that, compared to conventional ACO with static parameters, ACOGA improves system throughput by over 12% and converges faster.
Study on Capacity of Route Aggregate Networks Based on Slotted Transmission Protocol
LI Yue-xin and ZHU Ming
Computer Science. 2015, 42 (8): 101-105. 
Because of the stochastic network topology and route selection in wireless ad hoc networks, capacity analysis is complicated. We study the effect of energy constraints on the capacity of wireless ad hoc networks in which routes can be aggregated and node transmission follows a slotted contention protocol. Under the assumption that a node transmits only when it has accumulated enough energy, a capacity analysis model based on a closed queueing network was proposed. The model simultaneously captures the data buffers, energy buffers and the random access protocol. The impact of energy constraints on the design parameters of the random access protocol is then analyzed to optimize network performance, and the effects of energy constraints on maximum stable throughput, stability region and packet drop rate are evaluated. Simulation results validate the accuracy of the proposed analytical model.
Dynamic Role Property Based Trust Model for MP2P Networks
CAO Xiao-mei and HU Wen-jie
Computer Science. 2015, 42 (8): 106-111. 
Aiming at the poor stability and high dynamics of terminals in MP2P networks, we propose a dynamic role-property-based trust model for MP2P networks. The model introduces a trade stability factor and a time experience value, and dynamically assigns different roles to nodes based on their time experience values and their contributions over different periods. The more important a node's role, the larger its weight when trust values are calculated and the larger the upper limit allowed when downloading resources. Simulations show that the model can limit the influence of unstable nodes, strengthen the function of stable nodes, reduce the effects of free riders and hypocritical nodes, and improve the trade success rate.
Analysis on Energy-saving Task Scheduling Strategy Based on Stochastic Petri Net for Cloud Computing
ZHAO Bin, WANG Nao and WANG Gao-cai
Computer Science. 2015, 42 (8): 112-117. 
To address the high energy consumption of cloud computing, the paper proposes a task scheduling strategy called first scheduling with the minimum energy (FSME). The strategy first considers the working states of servers when tasks are scheduled, and then schedules each task to the server with minimum energy consumption subject to its response time constraint. If none of the working servers can meet the response time requirement of the current task, the algorithm considers the idle servers and schedules the task to the server with the lowest execution energy consumption. A stochastic Petri net is used to model the algorithm and analyze its energy consumption and performance. Simulation results show that FSME improves energy efficiency while meeting quality of service requirements.
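The scheduling rule itself is simple enough to sketch directly from the abstract (the server fields 'energy' and 'resp_time' are hypothetical stand-ins for the paper's cost model):

```python
def fsme_schedule(task, working, idle):
    """FSME rule: among working servers that meet the task's response-time
    constraint, pick the one with minimum energy; if none qualifies, wake
    the idle server with the lowest execution energy."""
    ok = [s for s in working if s["resp_time"] <= task["deadline"]]
    if ok:
        return min(ok, key=lambda s: s["energy"])
    return min(idle, key=lambda s: s["energy"])

working = [{"id": 1, "energy": 5.0, "resp_time": 9.0},
           {"id": 2, "energy": 3.0, "resp_time": 12.0}]
idle = [{"id": 3, "energy": 8.0, "resp_time": 4.0}]
print(fsme_schedule({"deadline": 10.0}, working, idle)["id"])   # -> 1
```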
Research on Cloud Resource Scheduling Method Based on Map-Reduce
ZHANG Heng-wei, HAN Ji-hong, WEI Bo and WANG Jin-dong
Computer Science. 2015, 42 (8): 118-123. 
To improve the computing efficiency of Map-Reduce resource scheduling, a multi-objective resource scheduling model with QoS constraints was built that considers the scheduling problems of both the Map and Reduce phases. A chaotic multi-objective particle swarm algorithm was proposed to solve the model. The algorithm uses information entropy theory to maintain the non-dominated solution set, preserving the diversity of solutions and the uniformity of their distribution. On the basis of using the Sigma method for fast convergence, a chaotic disturbance mechanism is introduced to improve population diversity and the algorithm's global optimization ability, preventing it from falling into local extrema. Experiments show that the algorithm obtains solutions in few iterations and that its non-dominated solutions are evenly distributed, indicating that the convergence and diversity of its solution set are better than those of the traditional multi-objective particle swarm algorithm for Map-Reduce resource scheduling problems.
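The logistic map is the usual source of chaotic disturbance in such PSO variants; the sketch below shows one plausible form of the mechanism (our reading, not the paper's exact operator):

```python
import numpy as np

def chaotic_disturbance(position, bounds, strength=0.1, steps=5):
    """Perturb a particle with a logistic-map chaotic sequence to help it
    escape local optima: iterate z <- 4z(1-z), then inject as bounded noise."""
    lo, hi = bounds
    z = (position - lo) / (hi - lo + 1e-12)       # map position into (0, 1)
    z = np.clip(z, 0.01, 0.99)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                   # logistic map, chaotic regime
    disturbed = position + strength * (hi - lo) * (z - 0.5)
    return np.clip(disturbed, lo, hi)

print(chaotic_disturbance(np.array([0.3, -1.2]), (-2.0, 2.0)))
```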
Data Sharing Scheme Based on Multi-hop Device-to-Device(D2D) Forwarding
WANG Jun-yi, GONG Zhi-shuai, FU Jie-lin and QIU Hong-bing
Computer Science. 2015, 42 (8): 124-127. 
D2D links with good quality and high achievable data rates can be preferentially selected to improve the efficiency of data dissemination within a cluster. Therefore, a multi-hop D2D data forwarding scheme was introduced. The proposed scheme adaptively selects the proper relays, routes and hop counts according to a pre-defined data rate threshold and the quality of the D2D links. Three rules were designed for setting the threshold, and formulas for the resource cost of the corresponding algorithms were derived. Simulation results show that the three algorithms progressively improve resource utilization, and the best of them achieves significant performance gains compared to existing single-hop schemes.
Node Importance Ordering for Topology Structure of Cyber-physical Systems
YANG Zhi-cai, QIU Hang-ping, QUAN Ji-chuan and LEI Zhi-peng
Computer Science. 2015, 42 (8): 128-131. 
Node importance ordering for the topology of cyber-physical systems (CPS) is a critical aspect of topological analysis. A topology model for CPS, the interactive network model, was established according to the intrinsic structural features of CPS. Node interactive betweenness was then defined to capture the information-interaction characteristics of CPS and reflect the relative importance of each node. An effective node importance ordering algorithm with polynomial time complexity was presented. Finally, a CPS topology instance was analyzed and the result was compared with ordinary node betweenness, suggesting that node importance ordering can provide a valuable reference for the operation and protection of CPS.
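The paper's interactive betweenness is its own construction; for orientation, the baseline it is compared against, ordering nodes by standard betweenness, can be computed with networkx (an assumed third-party library; the scale-free graph is a stand-in topology):

```python
import networkx as nx

# Baseline ordering by standard node betweenness centrality.
G = nx.barabasi_albert_graph(30, 2, seed=1)   # stand-in scale-free topology
bc = nx.betweenness_centrality(G)
ranking = sorted(bc, key=bc.get, reverse=True)
print(ranking[:5])                            # five most central nodes
```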
Energy-efficient Wireless Sensor Network Data Storage and Query Mechanism Based on Virtual Ring
DENG Xiao-jun, OU-YANG Min and LI Yu-long
Computer Science. 2015, 42 (8): 132-135. 
To improve the efficiency of data storage and query in wireless sensor networks while minimizing network energy consumption and extending the network's life cycle, an energy-efficient data storage and query mechanism based on virtual rings was proposed. The sensor network domain is divided into many virtual rings, a collection ring is selected by a cost function based on minimum network energy consumption, and the storage and query tasks for event data are executed through the collection ring. Simulation results show that, compared with other energy-saving and data storage schemes, the proposed SDS scheme performs better in extending the network life cycle and improving energy efficiency.
Mimic Community Clustering of Social Networks
CHENG Ping-guang
Computer Science. 2015, 42 (8): 136-137. 
The formation and evolution of a social network is a dynamic process. Mimic computing can dynamically meet users' needs with respect to resources, tasks, safety, effectiveness and service, so it is a useful way to adapt to dynamic social networks. The paper proposes mimic community clustering (MCC) based on the traditional SIR model and mimic computing. Experiments on real data show that the method has high practical value.
Improved BLP Model Based on CRFs
MA Meng, TANG Zhuo, LI Ren-fa and XIONG Liao-te
Computer Science. 2015, 42 (8): 138-144. 
As most access control models lack the ability to perceive system security status and risks dynamically, the paper introduces the machine learning method of conditional random fields (CRFs) into the rule optimization of the BLP model and proposes a dynamic BLP model, CRFs-BLP. After preprocessing and tagging the historical access log, the feature set is extracted and the CRF++ toolkit is used to train on these datasets, so the model can be adjusted dynamically according to the current security state and events in the system, and the read-write scope of sensitive objects is limited dynamically. Experiments show the availability and accuracy of the model in a real environment.
Blind Signature-based Handover Authentication Protocol with Conditional Privacy Preserving in LTE/LTE-A Networks
QIN Ning-yuan, FU An-min and CHEN Shou-guo
Computer Science. 2015, 42 (8): 145-151. 
LTE/LTE-A is designed to provide low handover latency for mobile applications, but complexities and security vulnerabilities remain. To address the vulnerabilities in the LTE standard and in traditional handover authentication protocols, a new handover authentication protocol based on blind signatures was designed. In the registration phase, the authentication keys used during handover are provided through the blind signature; in the handover phase, anonymity, untraceability and conditional privacy preservation are achieved by exchanging pseudonyms and by the reversibility of the real identity. Theoretical analysis and simulation results show that the proposed protocol not only satisfies more security properties but also performs better than the LTE standard and other handover authentication protocols.
Study on Disaster Recovery Capability Evaluation Approach Based on Cloud Model
YAO Wen-bin, WANG Zhen, ZHAO Ling and YAO Xiang
Computer Science. 2015, 42 (8): 152-156. 
Existing index systems for disaster recovery capability often take insufficient account of dynamic indices; existing evaluation methods do not analyze system stability, and their weight calculation methods are far too subjective. To solve these problems, a new index system and evaluation approach for disaster recovery capability were proposed. Composite indicators, such as RPO and RTO, were added to the index system. Combined with the cloud model, the paper proposes a method for normalizing index values based on interval numbers and a comprehensive weighting model based on the G1 method and the entropy method. The evaluation constructs a cloud model of disaster recovery capability, which implements the conversion from qualitative to quantitative assessment. Experiments confirm the feasibility of the method.
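The entropy method named in the abstract is a standard objective weighting technique; the sketch below shows that half of the combined scheme (the subjective G1 half and the interval-number normalization are omitted):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method. Rows are evaluated systems, columns are
    normalized benefit-type indicators; low-entropy (more discriminating)
    indicators receive larger weights."""
    P = X / (X.sum(axis=0, keepdims=True) + 1e-12)          # column proportions
    n = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)    # per-indicator entropy
    d = 1.0 - E                                             # divergence degree
    return d / d.sum()

X = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.4],
              [0.9, 0.8, 0.7]])
print(entropy_weights(X).round(3))
```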
Trust Distance Based Malicious Nodes Detection Method in Vehicular Ad Hoc Network
WU Hai-qin and WANG Liang-min
Computer Science. 2015, 42 (8): 157-160. 
To meet the requirement of rapidly detecting malicious nodes in high-speed VANETs, a trust model based on a Bayesian hypothesis was proposed, addressing the fact that VANET topology changes quickly and is more vulnerable to internal attacks than traditional mobile ad hoc networks. The model intensifies the influence of negative events so that malicious recommendations are eliminated in advance and collusion attacks are avoided. The concept of recommendation trust distance is introduced as the metric for integrating recommendation trust. Compared with current trust-based detection methods, the model speeds up detection and simplifies recommendation delivery. Simulation experiments show that the trust model detects malicious nodes quickly and performs well in terms of network packet loss rate and malicious node detection rate.
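A Beta-reputation update is the usual concrete form of such Bayesian trust models; the sketch below illustrates the negative-event amplification the abstract describes (the weight and the exact posterior form are our assumptions):

```python
def update_trust(pos, neg, outcome_ok, neg_weight=2.0):
    """Beta-style direct-trust update: negative events are amplified by
    neg_weight so misbehavior drags trust down faster than good behavior
    restores it."""
    if outcome_ok:
        pos += 1.0
    else:
        neg += neg_weight
    trust = (pos + 1.0) / (pos + neg + 2.0)   # posterior mean of Beta(pos+1, neg+1)
    return pos, neg, trust

pos = neg = 0.0
for ok in [True, True, False, True, False]:
    pos, neg, t = update_trust(pos, neg, ok)
print(round(t, 3))
```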
Research on Invulnerability of IPv6 AS-level Internet
QIN Li, HUANG Shu-guang and CHEN Xiao
Computer Science. 2015, 42 (8): 161-165. 
With the rapid development of the Internet and the Internet of Things, migrating the communication protocol from IPv4 to IPv6 is an inevitable trend. This paper collects the latest IPv6 AS-level Internet data from the CAIDA Ark project (June 2014). Modeling these data as a complex network, we find that the IPv6 Internet also has small-world and scale-free properties. Furthermore, based on an analysis of the structure of the IPv4 Internet and common invulnerability measures, we propose invulnerability measures for the IPv6 Internet and a method for conducting invulnerability experiments. The results show that these measures describe invulnerability performance well. Under different attack strategies, of which degree-ordered attack causes the most damage, we find that the IPv6 Internet is simultaneously robust and vulnerable.
Static Security Policy Consistency Detection Based on Semantic Similarity
TANG Cheng-hua, WANG Li-na, QIANG Bao-hua, TANG Shen-sheng and ZHANG Xin
Computer Science. 2015, 42 (8): 166-169. 
Security policy semantics express the human will to control safety-related behavior. Aiming at the semantic conflicts that arise when policies are defined and converted, a static security policy consistency detection algorithm based on semantic similarity was proposed. Firstly, a domain ontology of the security policy is established, characteristic factors are extracted, and a method for computing semantic similarity based on ontology concept features is presented. Secondly, taking a firewall security policy as an example, a detection model is established and the static consistency detection algorithm is used to mark conflicting policies, ensuring the consistency of the final policy rule base. Experimental results show that the method detects conflicts effectively and provides a feasible way to resolve security policy conflicts during policy definition, formulation and mapping.
Research of Danger Signal Extraction Based on Changes in Danger Theory
YANG Chao and LI Tao
Computer Science. 2015, 42 (8): 170-174. 
Danger theory is an important research branch of artificial immune systems. It describes the working principle of the immune system from the perspective of danger, and has been widely used in intrusion detection, machine learning, data mining and other areas. The primary issue in establishing a danger theory model is how to extract danger signals adaptively. Starting from the central idea that change leads to danger, this paper establishes an adaptive danger signal extraction model based on detecting changes. According to the characteristics of different types of system resources, two danger signal extraction methods are designed: value changes and feature changes. Experiments verify that the model can adaptively extract danger signals without relying on prior knowledge.
Research on Rootkit Detection Method Based on Neural Network Expert System in Virtualized Environment
ZHAO Zhi-yuan, ZHU Zhi-qiang, SUN Lei and MA Ke-xin
Computer Science. 2015, 42 (8): 175-179. 
To address the high misjudgment ratio of Rootkit detection and the inability to detect unknown Rootkits in virtualized guest operating systems, a Rootkit detection method based on a neural network expert system (QPSO_BP_ES) was proposed. The detection system combines a neural network with an expert system to take advantage of both. In actual detection, QPSO_BP_ES first captures the typical characteristic behaviors of previously selected Rootkits, and the trained system then detects the presence of a Rootkit. Experimental results show that QPSO_BP_ES can effectively reduce the misjudgment ratio and detect both known and unknown Rootkits.
Receipt-freeness Electronic Voting Scheme Based on FOO Voting Protocol
LUO Fen-fen, LIN Chang-lu, ZHANG Sheng-yuan and LIU Yi-ning
Computer Science. 2015, 42 (8): 180-184. 
Secure and practical electronic voting protocols are a hot research topic in information security. A new receipt-free electronic voting scheme based on the FOO voting protocol was proposed in this paper, using blind signatures, serial numbers and unique identities. The proposed scheme improves the FOO voting protocol and provides ballot anonymity, verifiability and receipt-freeness; in addition, a voter may abstain from voting. The scheme retains all the advantages of the FOO protocol while enhancing security and flexibility, and is therefore more universal and practical than previous protocols.
Privacy-preserving Framework for Cloud Services Based on User Behavior
JI Zheng-bo, BAI Guang-wei, SHEN Hang, CAO Lei and ZHU Rong
Computer Science. 2015, 42 (8): 185-189. 
In response to user behaviors that threaten security and privacy in mobile cloud computing, a ring-identity mechanism based on a third party was proposed in this paper. Identity-based access control ensures that a user's virtual identity cannot be tracked, by providing the user with a ring identity certificate. Data auditing focuses on how to schedule data records and generate ring signatures so as to avoid leaking the position of critical data and to protect user privacy. Theoretical analysis shows that the framework achieves good security and privacy performance under the threat posed by user behavior records.
Research on Independence Rekey Model for Group Key Management
ZHOU Jian and SUN Li-yan
Computer Science. 2015, 42 (8): 190-193. 
In group key management, much time is spent on rekeying because non-updated members must take part in rekeying to distribute key material. To solve this problem, a novel independent rekeying model was presented, and an independent group key management scheme was designed based on bilinear pairings and the shared production of threshold cryptography. The scheme achieves key independence, and non-updated members do not participate in rekeying for join/leave operations: a fresh public encryption key does not invalidate the secret decryption keys of non-updated members. The rekeying delay and computation cost of the proposed scheme are therefore reduced effectively. The scheme logically satisfies the independent rekeying model and is well suited to wireless networks whose delay and computation budgets are strictly limited.
New k-anonymization Algorithm for Preventing Historical Attacks
LI Xiang and SUN Hua-zhi
Computer Science. 2015, 42 (8): 194-197. 
Aiming at the privacy of continuous queries in location-based services, a new k-anonymization algorithm for preventing historical attacks was proposed. The algorithm uses the positions, speeds and directions of surrounding users to predict their future positions, and then computes, for each candidate, the increase in the area of the anonymous region caused by adding that user to the set at different future time points. The smaller the sum of these area increases, the higher the user's priority for joining the anonymity set. The k-anonymity algorithm was simulated on the OPNET 14.5 platform. Simulation results show that the anonymity region formed by the proposed algorithm has an appropriate size, protecting user privacy while guaranteeing a certain quality of service.
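The priority score can be sketched directly from that description; here the anonymous region is simplified to a bounding box and only the candidate's predicted positions grow it (the paper's region shape and prediction model may differ):

```python
import numpy as np

def area_increase(anchor_set, candidate_future_positions):
    """Total growth of the (bounding-box) anonymous region over the candidate's
    predicted future positions; smaller growth means higher join priority."""
    pts = np.asarray(anchor_set)                       # current members, (n, 2)
    def bbox_area(p):
        span = p.max(axis=0) - p.min(axis=0)
        return float(span[0] * span[1])
    base = bbox_area(pts)
    growth = 0.0
    for future in candidate_future_positions:          # one point per time step
        growth += bbox_area(np.vstack([pts, future])) - base
    return growth

members = [(0.0, 0.0), (1.0, 1.0), (0.5, 0.2)]
print(area_increase(members, [np.array([1.1, 0.9]), np.array([1.3, 0.8])]))
```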
Construction Method for Fault Tree Domain Ontology Supporting SWRL Rules
ZHOU Liang, HUANG Zhi-qiu and HUANG Chuan-lin
Computer Science. 2015, 42 (8): 198-202. 
Fault trees (FT) are widely used for the rapid location of system faults. However, the lack of accurate semantic information makes it hard to avoid the problem of duplicate construction. This paper introduces ontology into the fault tree domain and studies how to construct an FT domain ontology and SWRL rules. First, it proposes a method for representing FT knowledge with the Web ontology language OWL and constructs an FT domain ontology that is shareable, reusable and extensible. Second, it transforms the logical relationships among FT events into the Semantic Web Rule Language (SWRL). Finally, it feeds the SWRL rules and the FT ontology into the inference engine JESS; the new knowledge produced is exploited for rapid fault location, so that the causes of events can be located rapidly and efficiently in the FT. Experiments confirm the correctness and effectiveness of the method, which resolves the duplicate construction problem without impairing the rapid location of system faults.
Research on Formal Verification of Embedded Operating System
CHEN Li-rong, LI Yun and LUO Lei
Computer Science. 2015, 42 (8): 203-214. 
A layered formal model for an automotive embedded real-time operating system is presented. At the lower layer, the sequential kernel plays the infrastructural role of switching between concurrent entities such as tasks, ISRs and system services; at the higher layer, concurrent system services are provided to users. The two layers have different views of configurations and different operation granularities. As the most important safety-related feature, the memory isolation and protection mechanism between applications and the OS is modeled in the sequential kernel. An implementation correctness theorem for the OS is established along with the corresponding simulation relation and implementation invariants. According to the features of the model and the implementation languages involved, the OS is formally and effectively verified with a combination of the theorem prover Isabelle/HOL and the program verifier VCC.
Aspect Tracing in Aspect Oriented Business Process Modeling
NI Shan-shan, ZHANG Xuan, LI Tong and ZHANG Rui-yun
Computer Science. 2015, 42 (8): 215-219. 
Reducing model complexity is an important issue in business process management (BPM). Aspect-oriented business process modeling advocates separating different concerns from the main process, modeling them separately, and then combining the aspects with the main process through a weaving mechanism. A remaining problem is how to verify the impact of a woven aspect on the main process. We present and realize an aspect tracing method for aspect-oriented business process modeling based on Petri nets, and discuss a case study of a banking business process to demonstrate the approach.
Web Service Discovery Model of Semantic and Trust QoS Based on Hadoop
HE Xiao-xia and TAN Liang
Computer Science. 2015, 42 (8): 220-224. 
With the rapid growth of Web service applications, selecting the Web service that best meets a user's QoS requirements from many functionally similar services is an urgent problem. Two difficulties stand out: the objectivity and accuracy of QoS quantification, and the fact that QoS matching is limited to numerical matching and lacks a semantic description of service quality. To solve these problems, using Hadoop's distributed registration and service lookup, we designed a Web service discovery model with semantic and trusted QoS based on Hadoop. The model considers both semantic and numerical QoS matching. First, a combined subjective and objective weighting scheme is used to weight multi-dimensional QoS properties, improving their objectivity and accuracy, and credibility parameters are added to increase the trustworthiness of Web service QoS attributes. Second, the QoS ontology is extended on OWL-S to meet customers' demands for semantic QoS matching. Experimental results show that the model effectively relieves the query bottleneck and single point of failure, and accurately finds Web services that meet users' QoS requirements.
REPS:An Efficient Fault-tolerant Approach for Parallel Skyline Queries over Probabilistic Data Streams
ZHANG Wei-hua, LI Xiao-yong, MA Jun and YU Jie
Computer Science. 2015, 42 (8): 225-230. 
Parallel Skyline queries over probabilistic data streams, an important aspect of big data analysis, play an important role in various applications. To deal with query results that may become incorrect or interrupted due to faults occurring during parallel Skyline query processing, a replication-based fault-tolerant distributed parallel Skyline query scheme named REPS was proposed. Specifically, the compute nodes also serve as replication nodes, and a layer-alternation strategy for replica placement is proposed, so lost data can be recovered efficiently by selecting the replicas with the highest priority. Moreover, fault detection, data recovery and query recovery run throughout the query updating process to reduce communication and computation overhead and achieve rapid fault-tolerant parallel query processing. Extensive experiments demonstrate that REPS is highly efficient when no failure occurs or a single node fails, and maintains a high processing rate that meets query requirements even under multiple failures.
Reverse Nearest Neighbor Query Based on Voronoi Diagram for Road Network
ZHANG Li-ping, JING Hai-dong, LI Song and CUI Huan-yu
Computer Science. 2015, 42 (8): 231-235. 
In view of the shortcomings of existing reverse nearest neighbor (RNN) query methods in road networks, the NVD-RNN algorithm, which makes use of the network Voronoi diagram (NVD), is proposed. The algorithm divides the road network into small Voronoi regions and adopts two processes, filtering and refining. The filtering process stores possible query results in advance; the refining process finds the actual results among the candidate sets. In addition, the ADDNVD-RNN algorithm is provided to handle newly added points and the DENVD-RNN algorithm to handle deleted points. Experiments demonstrate that the algorithm has obvious advantages for reverse nearest neighbor queries in road networks.
Study on Skyline Query for Vague Database
ZHAO Fa-xin and JIN Yi-fu
Computer Science. 2015, 42 (8): 236-239. 
Skyline query processing has recently received a lot of attention in the database field. Because much real-world information is imprecise and uncertain, Skyline queries have become an important part of fuzzy data processing. Building on existing research, Skyline query processing over the Vague relational data model is discussed. In this framework, Skyline queries compute the extent to which each tuple of a given relation is not dominated by any other tuple of the same relation. The corresponding query formula and query algorithm are given. The key to efficiency is that the algorithm does not compute explicitly over all possible worlds, but works directly on the Vague relational database. On this basis, a processing method for Skyline queries with preselection conditions is discussed.
Rough Set Model Based on Grade Indiscernibility Relation
QIN Ke-yun and LUO Jun-fang
Computer Science. 2015, 42 (8): 240-243. 
The indiscernibility relation is a key notion of rough set theory. We propose a grade indiscernibility relation for information systems to characterize differences in the degree of discernibility. The rough set model based on the grade indiscernibility relation is presented and its basic properties are discussed. Furthermore, the relationship between the grade approximation operators and the Pawlak approximation operators is analyzed.
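For reference, the classical Pawlak approximation operators that the grade operators are compared against are, for an indiscernibility relation $R$ on a universe $U$ with equivalence classes $[x]_R$:

```latex
\underline{R}(X)=\{x\in U \mid [x]_R\subseteq X\},\qquad
\overline{R}(X)=\{x\in U \mid [x]_R\cap X\neq\emptyset\}
```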
New Fast Manifold Learning Algorithm Based on MSC and ISOMAP
LEI Ying-ke
Computer Science. 2015, 42 (8): 244-248. 
To address the high complexity of the isometric feature mapping algorithm (ISOMAP), we design a new fast isometric feature mapping method (Fast-ISOMAP) based on a minimum set cover (MSC) strategy. Experiments show that Fast-ISOMAP greatly improves the computational efficiency of the original ISOMAP without significantly changing its performance, and can be used in large-scale manifold learning problems. Experimental results on many artificial benchmark datasets show the effectiveness of the proposed algorithm.
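One plausible realization of the MSC idea (our reading, with scikit-learn's Isomap assumed for the embedding step) is to greedily pick cover points as landmarks and embed only those, which is where the speedup comes from:

```python
import numpy as np
from sklearn.manifold import Isomap

def msc_landmarks(X, radius):
    """Greedy set-cover-style landmark selection: repeatedly pick a point and
    discard everything within radius, so the landmarks cover the data set."""
    remaining = list(range(len(X)))
    landmarks = []
    while remaining:
        i = remaining[0]
        landmarks.append(i)
        d = np.linalg.norm(X[remaining] - X[i], axis=1)
        remaining = [j for j, dj in zip(remaining, d) if dj > radius]
    return landmarks

rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 2000)
X = np.c_[t * np.cos(t), rng.uniform(0, 5, 2000), t * np.sin(t)]  # swiss-roll-like
idx = msc_landmarks(X, radius=0.8)
Y = Isomap(n_neighbors=8, n_components=2).fit_transform(X[idx])   # landmarks only
print(len(idx), Y.shape)
```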
Dynamic Multi-objective Particle Swarm Optimization Algorithm Based on Human Social Behavior
WU Da-qing, ZHENG Jian-guo, ZHU Jia-jun and SUN Li
Computer Science. 2015, 42 (8): 249-252. 
To improve performance on multi-objective optimization problems, reduce computational complexity and improve convergence, a multi-objective particle swarm optimization algorithm based on human social behavior was proposed. Strategies such as a promotion/resistance factor and a local jump strategy are introduced to give the algorithm strong global search ability and good robustness. Typical multi-objective optimization functions were used to test the algorithm. The results show that the proposed algorithm converges quickly and has a strong ability to jump out of local optima, so it can be applied in many fields.
Self-adaptive Artificial Bee Colony Algorithm Based on Mean Entropy Strategy
XU Shuang-shuang, HUANG Wen-ming and LEI Qian-qian
Computer Science. 2015, 42 (8): 253-258. 
To overcome the tendency of the artificial bee colony (ABC) algorithm to fall into local optima and converge prematurely, an improved algorithm named ASABC was proposed. The new algorithm initializes the population with a mean entropy tactic, which increases population diversity and avoids stagnation and prematurity. It also adopts a strategy that adaptively adjusts the neighbor selection step size to improve local search ability and calculation precision. To balance global and local search, a self-adaptive proportion selection strategy replaces the fitness-proportional selection of the original ABC algorithm. Simulation results on a suite of eight benchmark functions show that, compared with three common intelligent optimization algorithms, the new algorithm has remarkable local search ability and a faster convergence rate.
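The adaptive step strategy can be sketched on top of the classic ABC neighbor move v = x + phi * (x - partner); shrinking the step as a food source stagnates is our reading of the abstract, not the paper's exact schedule:

```python
import numpy as np

def adaptive_neighbor(x, partner, trials, max_trials=20, rng=None):
    """ASABC-style neighbor search sketch: the step range shrinks as a food
    source keeps failing to improve (trials counts stagnant attempts)."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = 1.0 - trials / max_trials          # step shrinks with stagnation
    j = rng.integers(len(x))                   # perturb one random dimension
    v = x.copy()
    v[j] = x[j] + rng.uniform(-scale, scale) * (x[j] - partner[j])
    return v

x = np.array([0.5, -1.0, 2.0])
print(adaptive_neighbor(x, np.array([0.1, 0.3, 1.5]), trials=10))
```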
Global Positive Region Inconsistency Based Attributes Core Computation
ZHAO Jie, LIANG Jun-jie, DONG Zhen-ning, CHEN Xu and TANG De-yu
Computer Science. 2015, 42 (8): 259-264. 
This paper first proposes basic algorithms for computing positive regions and equivalence classes based on bit vectors and an improved hash algorithm. A core attribute computation algorithm is then designed based on global positive region inconsistency. Unlike current algorithms, which must repeatedly compute complete positive regions when seeking the attribute core, this paper studies the characteristics of core attributes and detects the inconsistency between the positive region of C-{ai} and the global positive region, so the complete positive region of C-{ai} need not be recomputed. The recognition of core attributes via global positive region inconsistency is proved by three theorems. The algorithms were tested on 21 UCI data sets as well as ultra-high-dimensional and massive data sets. The results show that the proposed core computation algorithm performs well whether the numbers of entities and attributes are large or small, and is especially suitable for processing large decision tables.
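The underlying definition is standard in rough set theory: attribute a is core iff POS_{C-{a}}(D) differs from POS_C(D). The brute-force sketch below recomputes full positive regions (exactly the cost the paper's inconsistency detection avoids) but makes the criterion concrete:

```python
def positive_region(table, cond_cols, dec_col):
    """Objects whose condition-equivalence class is consistent on the decision."""
    groups = {}
    for i, row in enumerate(table):
        groups.setdefault(tuple(row[c] for c in cond_cols), []).append(i)
    pos = set()
    for members in groups.values():
        if len({table[i][dec_col] for i in members}) == 1:
            pos.update(members)
    return pos

def core_attributes(table, cond_cols, dec_col):
    """Attribute a is core iff removing it shrinks the positive region."""
    full = positive_region(table, cond_cols, dec_col)
    return [a for a in cond_cols
            if positive_region(table, [c for c in cond_cols if c != a],
                               dec_col) != full]

table = [(0, 0, 1, 'y'), (0, 1, 1, 'n'), (1, 1, 0, 'n'), (1, 0, 0, 'y')]
print(core_attributes(table, [0, 1, 2], 3))   # -> [1]: only attribute 1 is core
```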
Methodology of Attribute Weights Acquisition Based on Three-way Decision Theory
XUE Zhan-ao, ZHU Tai-long, XUE Tian-yu and LIU Jie
Computer Science. 2015, 42 (8): 265-268. 
Aiming at the subjectivity of weighting rules and the uncertainty of parameter values in traditional decision-making, a methodology for acquiring attribute weights was studied based on rough sets and three-way decision theory. Definitions of attribute confirmation and attribute reduction were restated, a new method of attribute weight acquisition was presented, and it was compared with other methods by example, demonstrating its efficiency. The method evaluates attributes objectively, based on data and without prior information, so decision makers can make practical decisions and obtain a more reasonable weight distribution. The work has theoretical value for the study of attribute weights.
Optimized Semantic Extraction Algorithm of Complex Subtle Differentiated Network Data Characteristics
YANG Wei-jie
Computer Science. 2015, 42 (8): 269-272. 
Abstract PDF(619KB) ( 447 )   
References | RelatedCitation | Metrics
Semantic extraction from complex,subtly differentiated network data is a key technology for the accurate identification and retrieval of Web data.Such data are nonlinear and randomly distributed,cover a wide range of subjects and are updated frequently,so extraction is difficult.A semantically optimized feature extraction algorithm for network data was proposed based on Dopplerlet transform projection.A semantic Gaussian edge rectangle window function is given,and a data fusion system with difference fusion filtering is constructed.The text is segmented and the information entropy of the massive data is computed,which can effectively eliminate abnormal data in clusters.The self-similarity of the Dopplerlet transform is used for projection matching,the nonlinear adaptive matching semantic spectrum feature is extracted,and a maximal linearly independent group is found in the Hilbert subspace.Simulation results show that the algorithm increases the semantic expression ability of the features,effectively distinguishes redundant data from residual data in network data,and improves subtle error detection and retrieval capabilities for heterogeneous networks.
Intelligent Optimization Algorithm for Interference Coordination in LTE-A Femtocell System
GAO Chao-xin, CHEN Xin and XIANG Xu-dong
Computer Science. 2015, 42 (8): 273-278. 
Abstract PDF(485KB) ( 403 )   
References | RelatedCitation | Metrics
In LTE-A femtocell systems,the dense deployment of femtocells may cause strong inter-cell interference(ICI),degrading cell throughput and the quality of service(QoS) received by femto user equipments(FUEs).Fractional frequency reuse(FFR) has been widely recognized as an efficient solution to this problem.We proposed an intelligent fractional frequency reuse(I-FFR) algorithm for interference coordination in two-tier LTE-A femtocell systems.In pursuit of high system throughput and low user outage probability,the proposed I-FFR algorithm utilizes a genetic algorithm and a graph-annealing algorithm to achieve adaptive FFR by optimizing the proportion of the cell center region(CCR) and the clustering of the cell edge region(CER).Simulation results show that,compared to the conventional FFR-3 interference coordination scheme,the proposed I-FFR algorithm improves system throughput by over 15% and decreases the average outage probability of CER users from 85% to 40%.
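A toy genetic search over a single gene, the CCR radius proportion, may clarify the optimization loop. The fitness callable is a placeholder for a throughput/outage simulator and is not the paper's evaluation model.

```python
import random

def genetic_search(fitness, pop_size=20, generations=50,
                   mutation=0.1, lo=0.1, hi=0.9):
    """Toy GA over one gene: the CCR radius proportion in [lo, hi].

    `fitness` would score a candidate proportion by simulated system
    throughput/outage; here it is any user-supplied callable.
    """
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                      # arithmetic crossover
            if random.random() < mutation:
                child += random.gauss(0.0, 0.05)       # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)
```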
Study of Semantic Understanding by LDA
GAO Yang, YANG Lu, LIU Xiao-sheng and YAN Jian-feng
Computer Science. 2015, 42 (8): 279-282. 
Abstract PDF(405KB) ( 521 )   
References | RelatedCitation | Metrics
Latent Dirichlet allocation(LDA) is a popular model for text clustering,and has been shown to improve the performance of information retrieval by explaining queries and documents effectively.There are two main algorithms for LDA inference:Gibbs sampling and belief propagation.This paper compared the effect of these two inference algorithms on information retrieval at different topic scales,and used two different ways to represent queries and documents:one represents them by the document-topic distribution,the other by word refactoring.Experimental results show that the combination of the document-topic distribution representation and Gibbs sampling inference improves the performance of information retrieval.
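The document-topic representation is easy to demonstrate with gensim; note that gensim's LdaModel trains with online variational Bayes rather than Gibbs sampling or belief propagation, so this sketch only shows what the representation looks like.

```python
from gensim import corpora, models

# Tiny corpus; real experiments would use a retrieval collection.
texts = [["topic", "model", "inference"],
         ["gibbs", "sampling", "inference"],
         ["query", "document", "retrieval"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# gensim's LDA uses online variational Bayes, not Gibbs sampling or
# belief propagation; it only illustrates the document-topic vector
# used to represent queries and documents for retrieval.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)
query_bow = dictionary.doc2bow(["inference", "sampling"])
print(lda.get_document_topics(query_bow))   # [(topic_id, probability), ...]
```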
Real-time Positioning Algorithm Based on Clustering,Spatial Diversity and Continuous Trajectory
CHEN Ye-gang and XU Ze-tong
Computer Science. 2015, 42 (8): 283-287. 
Abstract PDF(1015KB) ( 447 )   
References | RelatedCitation | Metrics
Since eliminating received signal strength samples in the time domain easily reduces the location update rate and harms real-time performance,we first used space diversity in the positioning stage and built a vector queue of signal strengths at adjacent positions to calculate the strength at the current position.Then we used the continuity of the trajectory to eliminate location errors produced by temporal floating.At the same time,by clustering the fingerprint database,target localization is performed with only a small amount of calculation inside a smaller cluster set,without traversing the entire database,so the calculation does not increase as the fingerprint database grows,which greatly reduces the amount of computation.Finally,simulations and experiments show that the algorithm can effectively reduce both the amount of calculation and the positioning error.
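A minimal sketch of the clustered-fingerprint idea, assuming scikit-learn's KMeans and a weighted k-NN estimate inside the matched cluster; the queue-based spatial diversity and trajectory smoothing steps are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_index(fingerprints, n_clusters=8):
    """Cluster RSS fingerprints so a query only searches one cluster.
    fingerprints: (n_points, n_aps) RSS matrix."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit(fingerprints)

def locate(rss, km, fingerprints, positions, k=3):
    """Weighted k-NN restricted to the query's cluster."""
    cluster = km.predict(rss.reshape(1, -1))[0]
    idx = np.where(km.labels_ == cluster)[0]          # members of that cluster
    d = np.linalg.norm(fingerprints[idx] - rss, axis=1)
    order = np.argsort(d)[:k]
    nearest = idx[order]
    w = 1.0 / (d[order] + 1e-9)                       # inverse-distance weights
    return (positions[nearest] * w[:, None]).sum(axis=0) / w.sum()
```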
Construction Algorithm of Fuzzy Concept Lattice Based on Constraints
CUI Fang-ting, WANG Li-ming and ZHANG Zhuo
Computer Science. 2015, 42 (8): 288-293. 
Abstract PDF(540KB) ( 468 )   
References | RelatedCitation | Metrics
The general process of constructing a fuzzy concept lattice does not take the user's requirements into account,and users are sometimes not interested in all the intensions of attribute sets in fuzzy concept lattice nodes.In order to enhance the pertinence of the fuzzy concept lattice,reduce time and space complexity and construct a lattice that meets users' needs,the background knowledge in which users are interested was first defined as a constraint condition.We divided the constraints into three categories by the relations among attributes:single-constraint,and-constraint and or-constraint,and formalized each constraint as a predicate formula.A construction algorithm based on constraints(Constrained Fuzzy Concept Lattice,CFCL) was then presented.The algorithm computes the fuzzy concept lattice bottom-up;by exploiting the monotone relation between a parent fuzzy concept's intent and a child fuzzy concept's intent,it reduces the judgment operations between fuzzy concepts and the constraints,improving the efficiency of lattice construction.Finally,experimental results verify that the proposed algorithm reduces the storage space and construction time of the fuzzy concept lattice.
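The three constraint categories translate naturally into predicates over a concept's intent. The sketch below assumes a fuzzy intent represented as an attribute-to-degree dict and a membership threshold theta; both are illustrative choices, not the paper's formalization.

```python
# Constraint predicates over a fuzzy concept's intent (attribute -> degree).
# The three forms mirror single-/and-/or-constraints; the threshold
# semantics is an assumption for illustration.
def single(attr, theta=0.5):
    return lambda intent: intent.get(attr, 0.0) >= theta

def and_(*preds):
    return lambda intent: all(p(intent) for p in preds)

def or_(*preds):
    return lambda intent: any(p(intent) for p in preds)

interesting = and_(single("price"), or_(single("quality"), single("service")))
print(interesting({"price": 0.8, "quality": 0.6}))   # True
```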
Speckle Projection Systems Based on GPU
HAN Lei, XU Bo, HUANG Xiang-sheng and ZHANG Yan-feng
Computer Science. 2015, 42 (8): 294-299. 
Abstract PDF(495KB) ( 1066 )   
References | RelatedCitation | Metrics
Speckle correlation algorithms can be used to estimate the depth information of a scene.However,such methods are easily disturbed by noise and inherently carry a high computational cost.This paper presented a structured-light 3D-reconstruction system implemented on GPU.Depth was estimated from the correlation of speckle projection images,with zero-mean normalized cross correlation(ZNCC) adopted as the correlation function.The traditional fast-ZNCC calculation method was modified for GPU platforms.Experimental results show a 39x speed-up on GPU compared with the same computation on CPU platforms.
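For reference, a plain CPU implementation of ZNCC on two equal-size patches looks as follows; the paper's contribution lies in the fast GPU variant, which this sketch does not reproduce.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-9):
    """Zero-mean normalized cross correlation of two equal-size patches.
    CPU reference implementation; returns a value in [-1, 1]."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum() * (b * b).sum()) + eps))
```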
Background Subtraction Based on Local Gradient Feature
ZHANG Xiao-jun, LIU Zhi-jing and CHEN Kun
Computer Science. 2015, 42 (8): 300-304. 
Abstract PDF(680KB) ( 419 )   
References | RelatedCitation | Metrics
The relation between accuracy and learning speed of the background model was discussed.The theoretical gradient expectation was estimated with an accurate gradient background model.Based on a Gaussian model,the probability of the deviation between the actual gradient and its expectation was given,leading to a similarity measurement of the gradient feature that uses no texture information.The similarity was then used to adjust the threshold for binarizing the difference image,fusing grey-level and gradient information.Experiments show that the proposed method achieves some improvement in foreground segmentation.
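A rough sketch of the Gaussian deviation model, assuming OpenCV Sobel gradients on a grayscale frame and a fixed sigma; the paper's exact similarity measure and threshold adaptation may differ.

```python
import numpy as np
import cv2  # OpenCV, assumed available; any gradient filter would do

def gradient_similarity(frame, bg_grad, sigma=8.0):
    """Per-pixel probability that the observed gradient magnitude matches
    the background gradient expectation bg_grad, under a Gaussian
    deviation model (an illustrative stand-in for the paper's measure).
    frame: single-channel grayscale image; bg_grad: same shape."""
    gx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return np.exp(-((grad - bg_grad) ** 2) / (2.0 * sigma ** 2))
```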
BSFCoS:Fast Co-saliency Detection Based on Block and Sparse Principal Feature Extraction
ZHOU Pei-yun, LI Jing, SHEN Ning-min and ZHUANG Yi
Computer Science. 2015, 42 (8): 305-309. 
Abstract PDF(1017KB) ( 371 )   
References | RelatedCitation | Metrics
With the rapid development of image acquisition technology,raw digital images are growing in number and becoming clearer.When processing these images,existing co-saliency detection methods require enormous memory and have high computational complexity,which makes it hard to satisfy the demands of real-time user interaction.This paper proposed a fast co-saliency detection method based on image blocking and sparse principal feature extraction.Firstly,the image is evenly divided into uniform blocks,and low-level features are extracted from the Lab and RGB color spaces.Then a truncated power method for sparse principal component extraction is used to extract sparse principal features,which preserves the characteristics of the original image to the maximum extent while reducing the number of feature points and attributes.Furthermore,K-Means is adopted to cluster the extracted sparse principal features and calculate the three salient feature weights.Finally,the saliency map of the single image and that of multiple images generated by feature fusion are combined to generate the co-saliency map.The proposed method was tested on two benchmark datasets:the Co-saliency Pairs and CMU Cornell iCoseg datasets.The experimental results demonstrate that BSFCoS has better effectiveness and efficiency on multiple images compared with existing co-saliency methods.
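The blocking and clustering stages can be sketched briefly; the snippet below averages per-block features and clusters them with K-Means, while the truncated-power sparse PCA and saliency fusion steps are omitted. Array shapes and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def block_features(img, block=8):
    """Average feature value per block.  `img` is assumed to stack the
    Lab and RGB channels along the last axis; a simplified stand-in for
    the paper's low-level feature extraction."""
    h, w, c = img.shape
    h, w = h - h % block, w - w % block          # crop to a block multiple
    v = img[:h, :w].reshape(h // block, block, w // block, block, c)
    return v.mean(axis=(1, 3)).reshape(-1, c)    # one row per block

feats = block_features(np.random.rand(64, 64, 6))   # e.g. Lab + RGB channels
labels = KMeans(n_clusters=6, n_init=10).fit_predict(feats)
```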
Method of Calculating Fractal Dimension for Color Images
LI Yu-rong and DUAN Jiang
Computer Science. 2015, 42 (8): 310-313. 
Abstract PDF(593KB) ( 396 )   
References | RelatedCitation | Metrics
Fractal dimension is an important metric for describing the complexity of color images,widely used to extract image characteristics and to classify,segment or index images.Many approaches to calculating the fractal dimension of grayscale or binary images have been proposed,but very few methods address color images.A simple,automatically computable method of fractal dimension estimation was presented for color images,which extends the differential box-counting method to a 5-D Euclidean hyper-space.Experiments demonstrate that the proposed method captures the complexity of color images and outperforms others in identifying the roughness of color textures and in computational accuracy.
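For background, the classical differential box-counting estimator for a grayscale image is sketched below; the paper's method extends the same counting idea to a 5-D (x, y, R, G, B) hyper-space, which this sketch does not implement.

```python
import numpy as np

def box_count_dimension(gray, sizes=(2, 4, 8, 16)):
    """Classical differential box-counting on a grayscale image.
    The fractal dimension is the slope of log N(r) vs log(1/r)."""
    h, w = gray.shape
    g = gray.astype(np.float64)
    counts = []
    for s in sizes:
        hh, ww = h - h % s, w - w % s
        blocks = g[:hh, :ww].reshape(hh // s, s, ww // s, s)
        zmax = blocks.max(axis=(1, 3))
        zmin = blocks.min(axis=(1, 3))
        box_h = 256.0 * s / max(h, w)              # box height in gray levels
        n_r = np.ceil((zmax - zmin) / box_h) + 1   # boxes spanned per cell
        counts.append(n_r.sum())
    x = np.log(1.0 / np.array(sizes, dtype=np.float64))
    y = np.log(np.array(counts))
    return np.polyfit(x, y, 1)[0]                  # slope = dimension estimate
```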
Mean Shift Object Tracking Algorithm of Adaptive Threshold Kirsch-LBP Texture Features
TANG Ji-yong, ZHONG Yuan-chang, ZHANG Xiao-chen and ZHAO Guo-long
Computer Science. 2015, 42 (8): 314-318. 
Abstract PDF(1197KB) ( 388 )   
References | RelatedCitation | Metrics
To improve the performance of the mean shift tracking algorithm based on a single color feature,whose target model degrades when the light changes,a new mean shift tracking algorithm combining a Kirsch-LBP texture feature with an HSV color feature was presented.Firstly,a novel light-resistant adaptive threshold Kirsch-LBP feature descriptor was presented,which uses the eight directional differences of the Kirsch operator,takes the average of the LBP template as an adaptive threshold,and then extracts the local texture feature according to the rotation-invariance principle of LBP.Secondly,the relationship between similarity coefficients of different features is used as a weighting criterion to construct new weights.Finally,the descriptor was embedded into the mean shift algorithm to realize target tracking.Experimental results show that the algorithm effectively improves tracking accuracy in scenes with changing light,and improves the performance of the traditional mean shift tracking algorithm.
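A sketch of the Kirsch-LBP idea: convolve with the eight directional Kirsch masks and binarize each response against the per-pixel mean response to form an 8-bit code. The thresholding detail follows the general LBP literature and is an assumption, not necessarily the paper's exact operator.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """The eight 3x3 Kirsch directional masks, generated by rotating the
    north mask's outer ring."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    north = [5, 5, 5, -3, -3, -3, -3, -3]   # ring values of the north mask
    masks = []
    for k in range(8):
        rotated = north[-k:] + north[:-k]
        m = np.zeros((3, 3))
        for (r, c), v in zip(ring, rotated):
            m[r, c] = v
        masks.append(m)
    return masks

def kirsch_lbp(gray):
    """LBP-style 8-bit code from the eight Kirsch direction responses,
    binarized against their per-pixel mean (the adaptive threshold idea)."""
    resp = np.stack([convolve(gray.astype(float), m) for m in kirsch_masks()])
    bits = (resp >= resp.mean(axis=0)).astype(np.uint8)
    return sum(bits[i] << i for i in range(8))
```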
Printing Defects Detection and Realization in Food Packaging Based on Image Registration
YANG Zu-bin and DAI Xiao-hong
Computer Science. 2015, 42 (8): 319-322. 
Abstract PDF(617KB) ( 534 )   
References | RelatedCitation | Metrics
There is a big spatial difference between the real-time image collected by a traditional printing defect detection system for food packaging and the standard image.Before defect detection,the real-time image is registered with the standard image,and then image defects are detected and identified.In view of the long detection time,low sorting efficiency,high miss rate and high demands on human vision of traditional detection methods,an image registration algorithm for printing defect detection in food packaging based on image enhancement was proposed.Applying an improved wavelet transform algorithm to image edge detection effectively solves the problem of false detections caused by noise.Experimental simulation results indicate that the system is highly stable and reliable,and can precisely detect micro-defects such as knife fuse and braces smaller than 0.1 mm,realizing non-destructive detection of food packaging printings.
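A minimal register-then-difference pipeline, assuming OpenCV's ECC alignment on single-channel uint8 images of equal size; the paper's enhancement and wavelet edge-detection stages are not reproduced here.

```python
import cv2
import numpy as np

def detect_defects(standard, realtime, thresh=30):
    """Align the real-time image to the standard image with ECC (a stand-in
    for the paper's registration step), then threshold the absolute
    difference to flag candidate defect pixels.
    Both inputs: single-channel uint8 images of the same size."""
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(standard, realtime, warp,
                                   cv2.MOTION_EUCLIDEAN)
    aligned = cv2.warpAffine(realtime, warp,
                             (standard.shape[1], standard.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(standard, aligned)
    return diff > thresh                      # boolean defect mask
```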