Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42 Issue 10, 14 November 2018
  
Survey of Map-matching Algorithm for Intelligent Transport System
ZHOU Cheng, YUAN Jia-zheng, LIU Hong-zhe and QIU Jing
Computer Science. 2015, 42 (10): 1-6. 
Abstract PDF(530KB) ( 1202 )   
References | RelatedCitation | Metrics
Map matching is a research hotspot, and a difficult problem, in the field of intelligent transportation systems. It is a common, low-cost method for obtaining the real-time position and road information of vehicles. This paper collates and analyzes a large body of recent literature on map-matching algorithms, which can be divided into geometric matching algorithms, topological algorithms, probabilistic algorithms and advanced algorithms. It systematically introduces the classic map-matching literature, compares the differences among the various methods, and discusses future development trends.
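As a concrete illustration of the geometric class, the following minimal Python sketch snaps a GPS fix to the nearest road segment by perpendicular projection; the toy road network and function names are illustrative, not taken from any surveyed algorithm.

```python
import math

def project_to_segment(p, a, b):
    """Project point p onto segment a-b; return (snapped_point, distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        t = 0.0  # degenerate segment: snap to its single point
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), math.hypot(px - sx, py - sy)

def match(point, segments):
    """Snap a GPS fix to the closest road segment (geometric point-to-curve matching)."""
    return min((project_to_segment(point, a, b) for a, b in segments),
               key=lambda r: r[1])

# Hypothetical two-road network; a fix at (3, 1) should snap onto the horizontal road.
roads = [((0, 0), (10, 0)), ((0, 0), (0, 10))]
snapped, dist = match((3, 1), roads)
```

Topological and probabilistic algorithms refine exactly this step by also using connectivity between consecutive fixes and positional error distributions.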
Research and Perspective on Domain Adaptation Learning Algorithms
MENG Juan, HU Gu-yu, PAN Zhi-song and ZHOU Yu-huan
Computer Science. 2015, 42 (10): 7-12. 
Abstract PDF(569KB) ( 764 )   
Domain adaptation learning aims to solve the learning problem in a target domain by using labeled samples from a source domain. The key challenge is how to minimize the distribution distance between domains and handle changes in data distribution effectively. This paper surveys and classifies domain adaptation learning algorithms, summarizes the characteristics of each type, carefully analyzes five typical algorithms and compares their performance, and indicates directions worthy of further exploration.
Voxel Features Segmentation of Triangular Mesh Models
MA Yuan-kui and BAI Xiao-liang
Computer Science. 2015, 42 (10): 13-15. 
Abstract PDF(568KB) ( 456 )   
Because existing model segmentation methods lack engineering semantics for mechanical manufacturing, a voxel-feature segmentation of triangular mesh models is proposed. Based on segmentation of the triangular mesh, the surface type of each mesh region is identified, and the identified surface set is then matched against basic voxels or typical structures represented by salient features, so that the segmentation results are classified as free surfaces, basic voxels or complex voxels. Voxel-feature segmentation with engineering semantics is thus achieved, which makes model reconstruction easier and faster.
Speech Enhancement Based on Gain Dictionary Queries
PANG Liang, CHEN Liang, ZHANG Yi-peng and HUANG Qing-quan
Computer Science. 2015, 42 (10): 16-19. 
Abstract PDF(292KB) ( 455 )   
In statistical-model-based speech enhancement, different distribution models correspond to different gain functions. Owing to the uncertainty of the speech signal, no single distribution function can accurately model the speech and noise spectra, so any fixed reference model introduces some error. We present a speech enhancement algorithm based on gain dictionary queries: a gain dictionary is trained on a noisy speech corpus under the log-spectral distortion criterion, and is indexed by estimates of the a priori and a posteriori SNR. Finally, we evaluated the proposed algorithm with ITU-T P.862 PESQ, segmental SNR, overall SNR and log-spectral distortion, and compared it with Gaussian-distribution and Laplacian-distribution baselines. The experimental results show that the algorithm outperforms the others in both stationary and non-stationary noise environments, and that musical noise and residual background noise are well suppressed.
Ultrasonic Waves Based Gesture Recognition Method for Wearable Equipment
YANG Xiao-dong, CHEN Yi-qiang, YU Han-chao, LIU Jun-fa and LI Zhan-ge
Computer Science. 2015, 42 (10): 20-24. 
Abstract PDF(454KB) ( 881 )   
Wearable equipment has several limitations, such as small form factor and limited power and CPU performance, which traditional human-computer interaction methods based on touch screens and computer vision cannot accommodate. Based on the Doppler effect of sound waves, we propose a low-power, robust method. The method uses the Goertzel algorithm to extract frequency-shift features of the sound wave, from which the moving direction of the user's hand can be obtained, and then uses HMMs to classify the hand gestures. Using the proposed method, we conducted a series of comparative experiments on the Surface Pro, one of Microsoft's mobile terminals. The results show that, in both quiet and noisy environments, the proposed method achieves a high precision rate, low computational complexity (and hence low power consumption) and good robustness, so it can meet the needs of gesture recognition in wearable equipment development.
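The single-bin Goertzel computation the method relies on can be sketched as follows; this is the textbook recurrence for one frequency bin, with a made-up sample rate and test tone, not the authors' implementation.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel algorithm: power of one frequency bin, cheaper than a full FFT
    when only a few bins (here, around the emitted ultrasonic tone) are needed."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # second-order recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic 1 kHz tone: its bin should carry far more power than an empty bin.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(400)]
```

A hand moving toward or away from the device shifts energy into bins just above or below the emitted frequency, which is what the method's frequency-shift features capture.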
Sinus Bradycardia Detection Method Based on Photoplethysmography for Wearable Computing
ZHAO Hai, LI Da-zhou, CHEN Xing-chi and LI Si-nan
Computer Science. 2015, 42 (10): 25-30. 
Abstract PDF(776KB) ( 1023 )   
The growing interest in wearable computing in daily life has led to many studies on unconstrained biological signal measurement. Photoplethysmography (PPG) is an extremely useful wearable medical sensing tool and lends itself to health-care monitoring devices, since it can be measured easily on the body. In this paper, a sinus bradycardia detection method based on SVM classification is designed. Pulse wave data collection, storage and feature vector extraction are controlled by a software platform, and an SVM classifier is built to determine whether the user's heart is currently in sinus bradycardia. The optimal parameters were determined experimentally as C=38 and g=7, giving a classification accuracy of 94.44%; the accuracy verified on the test set is 94.18%. The proposed method provides a new application field for wearable computing products based on the photoplethysmography signal.
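A highly simplified sketch of the pipeline's front end, assuming only that systolic peaks are detected in the PPG trace and that a mean heart rate below 60 bpm indicates sinus bradycardia; the threshold, the synthetic pulse wave and the function names are illustrative, and the paper's actual SVM classifier is omitted.

```python
def detect_peaks(signal, threshold):
    """Indices of local maxima above threshold (systolic peaks in a PPG trace)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]]

def is_bradycardia(signal, sample_rate, threshold=0.5):
    """Simplified rule: mean heart rate below 60 bpm => sinus bradycardia."""
    peaks = detect_peaks(signal, threshold)
    if len(peaks) < 2:
        return False  # not enough beats to estimate a rate
    mean_period = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / sample_rate
    return 60.0 / mean_period < 60.0

# Synthetic trace sampled at 100 Hz with one peak every 1.2 s, i.e. 50 bpm.
rate = 100
slow = [0.0] * 1000
for i in range(60, 1000, 120):
    slow[i] = 1.0
```

In the paper, the extracted pulse features feed an RBF-kernel SVM rather than a fixed rate threshold, which is what lifts accuracy to the reported 94%.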
Ergonomic Considerations in Design of Wearable Exoskeleton to Aid Walking
QIU Jing, CHENG Hong and GUO Hao-xing
Computer Science. 2015, 42 (10): 31-34. 
Abstract PDF(339KB) ( 1094 )   
Because of growing population aging and the large number of physically disabled people in China, the contradiction between supply and demand of rehabilitation professionals in China is gradually growing. To address it, the PRMI exoskeleton robot was designed for hemiplegia and paraplegia patients who have partially or completely lost motor function of the lower extremities. The exoskeleton system provides motion compensation to its wearer to aid walking. This paper introduces the design objectives of the PRMI assistive exoskeleton robot. Through analysis of ergonomic considerations in the design of this wearable exoskeleton, several ergonomic aspects are presented, including the adjustable range, joint range of motion, maximum torque, kinematic consistency and human-machine interaction.
Research of Discriminant Method for Human Body Physiological State Based on Support Vector Machine
CHEN Xing-chi, ZHAO Hai, DOU Sheng-chang, LI Si-nan and LI Da-zhou
Computer Science. 2015, 42 (10): 35-38. 
Abstract PDF(965KB) ( 598 )   
Focusing on discriminating human physiological state, this paper extracts the pulse period and the height of the systolic peak from the time domain as input feature vectors for a support vector machine (SVM). A binary classification model built by supervised learning judges the physiological state to be either a normal state or an event state. Finally, three experiments were conducted: movement, sleep and drinking. Statistical analysis and evaluation show that the classification performance of the SVM is excellent.
Design and Implementation of Wearable ECG Signal Acquisition and Analysis System
MENG Yan, ZHENG Gang, DAI Min and ZHAO Rui
Computer Science. 2015, 42 (10): 39-42. 
Abstract PDF(857KB) ( 685 )   
In traditional electrocardiogram (ECG) measurement, restricted patient mobility and uncomfortable wearing remain practical problems. This paper proposes a wearable ECG acquisition and analysis system based on the features of wearable computing. The system samples ECG signals with a self-designed synchronous 12-lead/single-lead ECG acquisition device. ECG data can be saved on the device or transmitted to a server over a 3G network, and can be analyzed by self-designed software that provides auxiliary information for clinical heart diagnosis and performs ECG monitoring. Furthermore, after carefully studying interposer electrodes and fabric electrodes, the paper proposes a strategy that combines the two to improve ECG signal quality. Experimental data from wearing the acquisition device with the interposer fabric electrode show that the measurement procedure is comfortable and that the signal quality meets the requirements of clinical ECG monitoring.
Remote Sensing Respiration Signals
SHAN Yu-hao, CHEN Tong, WEN Wan-hui and LIU Guang-yuan
Computer Science. 2015, 42 (10): 43-44. 
Abstract PDF(497KB) ( 547 )   
We present a modified methodology for remotely sensing human respiration signals using a Microsoft Kinect sensor. The method is suitable for obtaining respiration signals while the subject is sitting. According to the results of a controlled experiment, the error between the breathing rate obtained from the Kinect and that obtained from contact measurement is 0.4%, and different breathing patterns can be detected from the respiration trace measured by the Kinect.
Meta-model of PaaS-based Cloud Application’s Deployment Environment
LIU Huan-huan, MA Zhi-yi and CHEN Hong-jie
Computer Science. 2015, 42 (10): 45-49. 
Abstract PDF(507KB) ( 468 )   
PaaS is one of the service paradigms of cloud computing, providing application container services. Calling APIs and editing configuration files are the main ways of deploying cloud applications on PaaS, which entails substantial learning cost and is error-prone. The APIs and configuration files of different PaaS platforms have different syntax; as a result, application migration between PaaS platforms is very difficult, and cross-platform or multi-platform deployment is scarcely possible. This article proposes a meta-model of the deployment environment of PaaS-based cloud applications, which can lower the learning cost, make the deployment process more automated, simplify application migration, and make cross-platform or multi-platform deployment possible.
Load Balancing Strategy on MapReduce with Locality-aware
LI Hang-chen, QIN Xiao-lin and SHEN Yao
Computer Science. 2015, 42 (10): 50-56. 
Abstract PDF(643KB) ( 623 )   
Existing research on load balancing strategies for MapReduce considers neither the distribution characteristics of intermediate data nor network traffic overhead, resulting in additional network traffic and decreased system efficiency. To solve this problem, this paper presents a locality-aware load balancing strategy. By taking advantage of the new resource-management features introduced by YARN, the strategy obtains the data distribution when buffered data are written to local disk. It then schedules reduce tasks according to this distribution and the processing speed of each node, decreasing network overhead while keeping the load of the nodes balanced. In addition, to further improve scheduling performance under data skew, fine-grained partitioning and self-adaptive fragmentation strategies are introduced. Comparative experimental results show that the presented strategy improves performance effectively and reduces total network traffic overhead.
Hybrid Optimal Data Replica Placement Scheme Based on Mosquitoes Oviposition Mating and Simulated Annealing
ZHANG Bang, WANG Xing-wei and HUANG Min
Computer Science. 2015, 42 (10): 57-59. 
Abstract PDF(328KB) ( 434 )   
To improve system scalability and reliability while also improving user access capability, cloud storage systems should adopt a multiple-data-replica scheme. Two problems must then be solved: selecting a proper location for each replica and optimally allocating user access requests to replicas. In this paper, a hybrid optimal multiple-data-replica placement scheme based on MOX (Mosquitoes Oviposition Mating) and SA (Simulated Annealing) is proposed. With minimizing total cost as its optimization objective, it uses MOX to determine candidate replica placements and then uses SA to refine the candidates into the optimal solution. We implemented the scheme in simulation on CloudSim, evaluated its performance, and compared it with an existing scheme. Simulation results show that the proposed scheme is feasible and efficient, with better performance.
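The SA refinement stage can be sketched on a toy replica placement instance as follows; the distance matrix, cooling schedule and neighborhood move are invented for illustration, and the MOX candidate-generation stage is omitted.

```python
import math
import random

def anneal(cost, initial, neighbor, t0=10.0, cooling=0.95, steps=300, seed=1):
    """Simulated annealing: accept a worse solution with probability exp(-delta/T)."""
    rng = random.Random(seed)
    cur, cur_cost = initial, cost(initial)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cost(cand)
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy instance: place 2 replicas among 5 nodes; total cost = each node's
# distance to its nearest replica (symmetric distances, made up).
dist = [[0, 3, 7, 9, 4],
        [3, 0, 2, 8, 6],
        [7, 2, 0, 1, 5],
        [9, 8, 1, 0, 2],
        [4, 6, 5, 2, 0]]

def total_cost(placement):
    return sum(min(dist[u][r] for r in placement) for u in range(5))

def swap_one(placement, rng):
    """Neighborhood move: relocate one replica to an unused node."""
    out = rng.choice(sorted(placement))
    new = rng.choice([n for n in range(5) if n not in placement])
    return frozenset(placement - {out} | {new})

best, best_cost = anneal(total_cost, frozenset({0, 1}), swap_one)
```

On this instance the global optimum has cost 6 (e.g. replicas at nodes 0 and 3); the annealer starts from cost 14 and refines toward it.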
Classification of Single Protocol Based on Keywords
ZHENG Jie and LI Jian-ping
Computer Science. 2015, 42 (10): 60-64. 
Abstract PDF(406KB) ( 397 )   
Network protocols are sets of standards for network communication, and protocol identification and analysis are of great significance for network management and security. Although many kinds of protocol identification technology exist, most are not suitable for identifying binary protocols. To address this issue, this paper proposes a novel identification method that classifies the messages of a single protocol into several types in a single-protocol communication environment. The method uses n-grams to segment the data frames, extracts a set of keywords with an unsupervised feature-selection algorithm, and finally identifies the different message types with a clustering algorithm. The method was evaluated on ICMP; the results show that both precision and recall can exceed 90%.
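The n-gram segmentation and keyword extraction steps can be sketched as follows; simple document frequency stands in here for the paper's unsupervised feature-selection algorithm, and the ICMP-like frames (a type/code header of 0x08 0x00, as in an echo request) are fabricated for illustration.

```python
from collections import Counter

def ngrams(frame: bytes, n: int):
    """All n-byte substrings of a frame (byte-level n-gram segmentation)."""
    return [frame[i:i + n] for i in range(len(frame) - n + 1)]

def keyword_candidates(frames, n=2, min_ratio=0.8):
    """n-grams present in a large fraction of frames are keyword candidates;
    document frequency is a crude stand-in for unsupervised feature selection."""
    counts = Counter(g for f in frames for g in set(ngrams(f, n)))
    return {g for g, c in counts.items() if c / len(frames) >= min_ratio}

# Three echo-request-like frames plus one other message type.
frames = [b"\x08\x00ABCD", b"\x08\x00EFGH", b"\x08\x00IJKL", b"\x00\x00MNOP"]
kw = keyword_candidates(frames, n=2, min_ratio=0.75)
```

The surviving keywords then serve as the feature dimensions on which the clustering algorithm separates message types.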
Characteristics of Satellite-Ground Links and Inter-satellite Links of Space Information Systems Based on Routing Requirements
ZHONG Tao, YI Xian-qing, HOU Zhen-wei and ZHAO Yue
Computer Science. 2015, 42 (10): 65-70. 
Abstract PDF(496KB) ( 739 )   
An effective routing algorithm is one of the core technologies to be developed for space information systems. Owing to topology changes in satellite networks, the connectivity features of inter-satellite links (ISLs) and satellite-ground links (SGLs) and the topology evolution laws of the satellite network must be taken into full consideration when studying routing algorithms. This paper analyzes the satellite network architecture of a space information system and studies the geometric characteristics of ISLs and SGLs. The characteristics of the ISLs and SGLs were simulated with the simulation software STK. By analyzing the simulation results, the connectivity features of the ISLs and SGLs were obtained, and further analysis yielded the topology evolution laws of the satellite network. These features and laws provide a reference for the design of satellite links and for later research on routing algorithms for space information systems.
Energy-aware Protocol Based on Distance and Probability
WANG Li-zhen, ZHANG Shu-kui, JIA Jun-cheng and WANG Jin
Computer Science. 2015, 42 (10): 71-75. 
Abstract PDF(408KB) ( 383 )   
As sensor nodes have limited battery resources, maximizing network lifetime is the key consideration. We present a new energy-aware geographic routing algorithm based on probability and distance (EPDRP), which considers both local position information and residual energy when choosing the next hop. We evaluated the GPSR and EPDRP protocols with the NS-2 simulator. Results show that EPDRP obtains a lower average hop count than GPSR, reduces routing protocol overhead, and effectively prolongs network lifetime.
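A minimal sketch of the kind of next-hop rule EPDRP describes, combining geographic progress toward the destination with residual energy; the multiplicative scoring and the toy neighbor table are assumptions for illustration, not the paper's exact formula.

```python
import math

def next_hop(current, dest, neighbors):
    """Greedy geographic choice: score = progress toward dest, weighted by
    the neighbor's residual energy (illustrative combination)."""
    def score(n):
        pos, energy = n
        progress = math.dist(current, dest) - math.dist(pos, dest)
        return progress * energy
    return max(neighbors, key=score)

# Candidate neighbors as (position, residual energy in [0, 1]).
nbrs = [((2, 0), 0.9),   # modest progress, nearly full battery
        ((3, 0), 0.2),   # best progress, nearly drained
        ((1, 1), 0.8)]
chosen = next_hop((0, 0), (10, 0), nbrs)
```

Pure GPSR would pick the node at (3, 0); weighting by energy steers traffic away from nearly drained nodes, which is how lifetime is prolonged.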
Storage Research of Small Files in Massive Education Resource
YOU Xiao-rong and CAO Sheng
Computer Science. 2015, 42 (10): 76-80. 
Abstract PDF(396KB) ( 427 )   
As a distributed cloud platform, Hadoop is one of the most widely used cloud storage technologies, providing reliable and efficient storage for applications with large datasets, but it suffers a performance penalty as the number of small files increases. To improve the efficiency of storing and accessing small files on Hadoop, we propose a scheme based on the relationships among small files. In the scheme, a set of correlated files is combined into one large file to reduce the file count; an indexing mechanism is used to access each small file; and metadata caching together with a prefetching mechanism for associated small files is used to improve read efficiency. The experimental results indicate that these methods improve the storage and access efficiency of small files on Hadoop.
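The merge-plus-index idea can be sketched as follows; the in-memory dictionary is a stand-in for the scheme's indexing mechanism, and the metadata caching and prefetching layers are omitted.

```python
def pack(files):
    """Merge small files into one blob plus an index: name -> (offset, length).
    One large HDFS file replaces many small ones, shrinking NameNode metadata."""
    blob, index, off = bytearray(), {}, 0
    for name, data in files:
        index[name] = (off, len(data))
        blob += data
        off += len(data)
    return bytes(blob), index

def read(blob, index, name):
    """Random access to one small file via its index entry."""
    off, length = index[name]
    return blob[off:off + length]

blob, idx = pack([("a.txt", b"hello"), ("b.txt", b"world!")])
```

Because correlated files sit adjacently in the blob, reading one makes prefetching its neighbors cheap, which is the basis of the scheme's read optimization.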
Energy Efficiency and Reliability of WSNs under Additive White Gaussian Noise Channels
CHEN Xue and LIU An-feng
Computer Science. 2015, 42 (10): 81-87. 
Abstract PDF(934KB) ( 442 )   
Node optimization of wireless sensor networks can improve network performance. A cross-layer optimization method is proposed, based on energy consumption characteristics and the relationship between data transmission reliability and energy consumption. It not only balances energy consumption and improves network lifetime, but also ensures reliable data transmission between nodes under additive white Gaussian noise. First, we rigorously derive the conditions for the optimal number of nodes N*, node placement d* and node transmission structure p* under minimum total energy consumption. Then, given that nodes near the sink consume more energy while nodes far from the sink retain surplus energy, and that data transmission reliability is directly proportional to energy consumption, we construct a cross-layer optimization strategy: for nodes near the sink, reliability is reduced to save energy and increase network lifetime; for nodes far from the sink, reliability is raised to make full use of their surplus energy, so that network energy consumption is balanced and network lifetime is improved. Finally, theoretical analysis and experimental results show that our design can improve network lifetime by 10% to 90% and network utility by 20% while guaranteeing the desired level of reliability.
Algorithm for Multisource Multicast with Network Coding over Multi-hop Wireless Networks
HAN Li, QIAN Huan-yan and LIU Hui-ting
Computer Science. 2015, 42 (10): 88-91. 
Abstract PDF(969KB) ( 403 )   
We first present a network-coding-based model for multisource multicast, in which back-pressure theory plays an important role in flow scheduling. We then propose a heuristic algorithm, MulSrc, which is compatible with the 802.11 DCF MAC and especially well suited to applications with low-loss, low-latency constraints. The use of network coding transparently implements both localized loss recovery and path diversity with low overhead. Simulation results show that our protocol outperforms the comparable protocols CodeCast and MMForests under multiple flows.
Multi-hop Relay OFDM System Based on Subcarrier Selection Pairing and Optimal Power Allocation
FENG Liang
Computer Science. 2015, 42 (10): 92-94. 
Abstract PDF(299KB) ( 438 )   
To improve the network coverage and capacity of OFDM cooperative communication systems, a multi-hop relay OFDM system based on subcarrier selection pairing and optimal power allocation is proposed. The performance of matched and unmatched relay pairs is analyzed on an OFDM relay system model; subcarrier selection pairing is converted into an integer programming problem, and the pairing matrix is computed with a matrix-based planning approach built on the Hungarian algorithm. Then, for the power allocation problem of OFDM systems, relay power and the power allocation method are optimized under KKT conditions. The final simulation results show that, compared with a resource allocation scheme with statistical QoS guarantees and a resource allocation algorithm for OFDM relay systems with heterogeneous services, the proposed method improves network coverage and capacity.
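On a toy instance the subcarrier pairing step looks like this; exhaustive search over permutations stands in for the Hungarian algorithm (which returns the same optimum in O(n^3) time), and the rate matrix is invented for illustration.

```python
from itertools import permutations

# rate[i][j]: achievable rate if first-hop subcarrier i is paired with
# second-hop subcarrier j (illustrative numbers).
rate = [[5, 1, 2],
        [2, 6, 1],
        [1, 2, 7]]

def best_pairing(rate):
    """Optimal one-to-one subcarrier pairing maximizing total rate.
    Brute force is fine at this size; the Hungarian algorithm scales it up."""
    n = len(rate)
    return max(permutations(range(n)),
               key=lambda p: sum(rate[i][p[i]] for i in range(n)))

pairing = best_pairing(rate)
```

Here the diagonal pairing is optimal with total rate 18; casting the problem as an assignment (integer programming) problem is what makes the Hungarian algorithm applicable.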
ACO Based Traffic Classified Routing Algorithm in Distributed Satellite Cluster Network
JIANG Nan and HE Yuan-zhi
Computer Science. 2015, 42 (10): 95-100. 
Abstract PDF(494KB) ( 391 )   
The architecture of the distributed satellite cluster network (DSCN) is presented and the characteristics of DSCN topology change are illustrated. On the basis of analyzing how network status is acquired and routes are calculated, we propose an ant colony optimization based traffic-classified routing (ATCR) algorithm for DSCN. In ATCR, traffic is divided into three classes: class A takes minimized end-to-end delay as its optimization target, class B maximizes throughput, and class C receives best-effort service. ATCR also mitigates the slow convergence of ant colony optimization (ACO). Simulation results show that the ATCR algorithm improves convergence speed and balances network traffic effectively. The end-to-end delay of traffic classes A and B is lower than that of the MACO algorithm, which does not use traffic classification, and ATCR achieves a better packet delivery ratio than MACO because it reduces the number of heavily loaded links and thus the packet loss caused by congestion.
Weighted Centroid Localization Algorithm Based on Mamdani Fuzzy Theory
WANG Wan-liang, SHI Hao and LI Yan-jun
Computer Science. 2015, 42 (10): 101-105. 
Abstract PDF(754KB) ( 496 )   
In many wireless sensor network applications, the accuracy of the weighted centroid localization algorithm depends on the precision of the weights. In this paper, a Mamdani fuzzy logic inference approach with an RSS membership function improved by the bat algorithm is proposed to increase the accuracy of weighted centroid localization. Comparing three types of centroid algorithms on a ZigBee hardware platform leads to the conclusion that the desired indoor localization accuracy can be achieved with the RSS membership function optimized by the bat algorithm.
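For reference, a crisp weighted-centroid baseline looks like this; the linear RSS-to-weight mapping is an illustrative placeholder for the Mamdani fuzzy inference stage, and the anchor layout and RSS values are made up.

```python
def weighted_centroid(anchors, rss_dbm):
    """Weighted centroid: a stronger received signal gives its anchor a larger
    weight. A Mamdani fuzzy system would replace this crisp weight function."""
    weights = [rss + 100 for rss in rss_dbm]  # shift negative dBm to positive weights
    total = sum(weights)
    x = sum(w * a[0] for w, a in zip(weights, anchors)) / total
    y = sum(w * a[1] for w, a in zip(weights, anchors)) / total
    return x, y

# Three anchors; the node is closest to the anchor at the origin (-50 dBm).
anchors = [(0, 0), (10, 0), (0, 10)]
est = weighted_centroid(anchors, [-50, -70, -70])
```

The paper's contribution is precisely in replacing the crude `rss + 100` mapping with fuzzy membership functions tuned by the bat algorithm, since weight quality dominates localization error.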
Transmission Interference Prediction Approach in WLAN Channel
LIU Yi, YE Yuan-hang and LING Jie
Computer Science. 2015, 42 (10): 106-112. 
Abstract PDF(869KB) ( 516 )   
Wireless access points that are not rationally planned and deployed in public or private areas easily lead to overlapping basic service sets (BSS) and increased channel interference. How to accurately predict potential channel interference factors before frame transmission and adopt a coping strategy has become a hotspot of WLAN communication technology. Therefore, a transmission interference prediction approach for WLAN channels is proposed. During transmission, the approach captures A-MPDU frame transmission status information from B-ACK frames and compiles statistics on it, calculates the probability that interference occurs, and finally predicts transmission interference factors and adopts a coping strategy. Simulation results show that the proposed approach not only effectively predicts channel transmission interference factors, but also increases network bandwidth utilization and improves frame transmission efficiency in a variety of topological environments.
DTN Routing Algorithm Based on Region Segmentation
HAN Jin, SHI Jin and REN Yong-jun
Computer Science. 2015, 42 (10): 113-116. 
Abstract PDF(418KB) ( 423 )   
Randomly moving DTN nodes are often trapped for a period in highly connected regions of the undirected graph formed by the DTN's paths. During that period, messages whose receiver is in the same region as the sender's current local region should be exchanged first; conversely, when nodes are leaving the sender's current local region, messages whose receiver is in a different region should be exchanged first. Following this strategy, a new DTN routing algorithm is presented. In the algorithm, the undirected graph is segmented into regions by a random-experiment method, and DTN messages are routed according to a node's current local region and the current local regions of the messages' receivers. Experimental results show that, compared with PRoPHET, Epidemic and SAW, the algorithm achieves a relatively high message delivery success rate and effectively reduces message copies and transit times.
Peer-to-Peer Traffic Identification Method Based on Chaos Particle Swarm Algorithm and Wavelet SVM
WANG Chun-zhi, ZHANG Hui-li and YE Zhi-wei
Computer Science. 2015, 42 (10): 117-121. 
Abstract PDF(405KB) ( 481 )   
A novel peer-to-peer (P2P) traffic identification algorithm is proposed, since P2P traffic is multi-scale and mutable. The identification algorithm is based on a support vector machine (SVM) with a wavelet kernel function. Further, the disadvantages of long training time and easy trapping in local minima in SVM parameter training are analyzed, and a chaos particle swarm algorithm is employed to optimize the SVM parameters, improving both training efficiency and identification accuracy. Finally, real campus network traffic data were used to test the proposed method. The experimental results show that it achieves higher identification accuracy and computational efficiency than an SVM with a traditional kernel function and parameter training method.
Interference Cancellation Method Based on Space-time Code and Interference Alignment
SUN Jiang-feng and TIAN Xin-ji
Computer Science. 2015, 42 (10): 122-125. 
Abstract PDF(288KB) ( 321 )   
An interference cancellation method based on space-time coding and interference alignment is proposed for the X channel with four antennas per user. A 4×4 space-time codeword with coding rate 2 is designed, and zero vectors are introduced into each codeword. First, the unwanted codewords at each receiver are aligned by interference alignment. Then, the unwanted codewords are mitigated by linear operations on the received signals. Finally, the wanted codewords are separated without mutual interference by non-linear operations on the received signals. The sum degrees of freedom and the diversity gain are 16/3 and 8, respectively. Simulation results show that the reliability of the proposed scheme outperforms the existing scheme in the same scenario.
IoT Complex Event Detection Model Based on Out-of-order Revise Framework
XU Dong-dong, YUAN Ling-yun and LI Jing
Computer Science. 2015, 42 (10): 126-131. 
Abstract PDF(602KB) ( 393 )   
Events with out-of-order timestamps are common in Internet of Things (IoT) application systems. To deal with this problem, a semantic event definition for IoT is presented and the out-of-order timestamp issue is described. Based on a mixed-driving space reclaim mechanism, an out-of-order revise framework for complex events built on a hash structure is established, and a complex event detection algorithm based on this framework (ORFCED) is proposed. To handle out-of-order timestamps, the algorithm extracts two characteristic parameters of each event to compute its hash address and stores events into a circular linked list in timestamp order, sorting them locally. Simulation results show that ORFCED not only processes events with high accuracy and reliability, but also responds in a timely fashion to out-of-order streams, making up for the deficiencies of existing methods. Finally, a case study verifies the effectiveness and feasibility of the proposed algorithm.
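The local reordering step can be sketched as follows; a single sorted buffer with binary-search insertion stands in for the paper's hash-addressed circular linked lists, and the event stream is fabricated.

```python
import bisect

class ReviseBuffer:
    """Tiny stand-in for an out-of-order revise structure: events are kept
    sorted by timestamp, so a late arrival is spliced into its proper place
    instead of being appended at the end."""
    def __init__(self):
        self._ts, self._events = [], []

    def insert(self, timestamp, payload):
        i = bisect.bisect(self._ts, timestamp)  # binary search for the slot
        self._ts.insert(i, timestamp)
        self._events.insert(i, (timestamp, payload))

    def ordered(self):
        return list(self._events)

buf = ReviseBuffer()
# Events "b" and "c" arrive after "d" despite their earlier timestamps.
for ts, ev in [(1, "a"), (4, "d"), (2, "b"), (3, "c")]:
    buf.insert(ts, ev)
```

Hashing on event parameters, as ORFCED does, bounds the length of each sorted list so that the splice stays cheap under high event rates.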
Sensitive Information Inference Method Based on Semi-supervised Document Clustering
SU Ying-bin, DU Xue-hui, XIA Chun-tao, CAO Li-feng and CHEN Hua-cheng
Computer Science. 2015, 42 (10): 132-137. 
Abstract PDF(504KB) ( 516 )   
Sensitive information leakage caused by multi-document clustering and inference is both high-risk and highly concealed. To address this problem, a sensitive information inference method based on semi-supervised document clustering is proposed. First, a new second-order constraint active learning algorithm is designed, which obtains high-quality constraints in less time by choosing the most uncertain informative data. Then, a new semi-supervised clustering algorithm combining constraints with DBSCAN is proposed, which effectively resolves DBSCAN's fuzzy boundaries and improves the precision of document clustering. Finally, a possibility measure of sensitive information over similar documents is calculated from the semi-supervised clustering results. The experiments show that the precision of semi-supervised clustering improves significantly and that the inference method can infer sensitive information effectively.
Mobile-agent-based Composite Data Destruction Mechanism for Cloud-P2P
XU Xiao-long, GONG Pei-pei, ZHANG Yun and BI Chao-guo
Computer Science. 2015, 42 (10): 138-146. 
Abstract PDF(1399KB) ( 491 )   
Cloud-P2P combines the resources of all nodes in cloud computing and peer-to-peer computing to achieve maximal collaboration and resource sharing. A data destruction mechanism is one of the important measures for protecting the security and controllability of users' data, and it is difficult to realize in Cloud-P2P systems. To meet the requirement of data destruction in Cloud-P2P storage systems, a composite data destruction mechanism based on mobile agents is put forward, which can destruct expired and waste data effectively and defend against malicious attacks on data. To destruct data on a single node effectively and at low cost, a novel data destruction method is also proposed that realizes destruction by data folding.
Collaborative Privacy Protection Based on Virtual Tracks in Position Privacy Protection
ZHAO Yun-hua, BAI Guang-wei, SHEN Hang, DI Hai-yang and LI Rui-yao
Computer Science. 2015, 42 (10): 147-153. 
Abstract PDF(806KB) ( 422 )   
This paper proposes collaborative privacy protection based on virtual tracks (VTPP), without relying on a trusted third-party agent. Users create virtual tracks and collaborate through self-organized communication. A convex-polygon cloaked area is created to protect users' location privacy, improving both location cloaking quality and query result accuracy. Our simulation results demonstrate that the VTPP algorithm achieves a higher cloaking success rate along with lower service response time.
Design of TCP Application Architecture for Network-oriented Information System
JIN Lei, XU Kai-yong, LI Jian-fei and CHENG Mao-cai
Computer Science. 2015, 42 (10): 154-158. 
Abstract PDF(487KB) ( 401 )   
According to the application requirements of trusted computing platforms in network-oriented information systems, a TCP application architecture, TCPAA, was proposed. The architecture is divided into two parts: an access authentication subsystem and an information exchange subsystem. To enhance the flexibility of trusted computing applications in the access authentication subsystem, a trust authentication mechanism based on a proof agent, PATAM, was proposed, together with an improved access authentication mode described in detail through its authentication protocol and application process. Beyond that, the internal and external trusted information transmission processes were designed for the information exchange subsystem, and an improved pyramid trusted assessment model, PTAM, was proposed. Finally, test experiments verify the good performance of the architecture. The results show that the application architecture better supports the development of trusted computing platform applications in network-oriented information system environments.
Secure Cloud Computing Service Protocols of Elementary Functions
LIU Xin, LI Shun-dong and CHEN Zhen-hua
Computer Science. 2015, 42 (10): 159-159. 
Abstract PDF(414KB) ( 423 )   
Cloud computing has become a powerful platform for solving many problems, but it has also brought many security troubles. Cloud computing of elementary functions is the foundation and core of all cloud computation. We presented secure cloud computing service protocols for elementary functions. After transforming the primitive parameters into other forms, we send the complex parts to the cloud platform for computation. Using the well-accepted simulation paradigm, we proved that the protocols are secure. With these protocols, the service receiver can solve complicated computation problems with fewer computation resources. The protocols have low computation and communication overheads; therefore they are effective and feasible, and can serve as basic subprotocols of cloud computing.
Collusion-free Rational Multi-secret Sharing Scheme
ZHANG En, SUN Quan-dang and LIU Ya-peng
Computer Science. 2015, 42 (10): 164-169. 
Abstract PDF(497KB) ( 670 )   
A collusion-free scheme for rational multi-secret sharing was proposed. Collusive behavior and preventive measures were analyzed, and a coalition-proof model and algorithm were developed so that the participants' strategies satisfy a computational coalition-proof equilibrium. Because participants do not know whether the current round is a test round, rational players cannot gain more by forming a coalition and therefore have no incentive to collude in the protocol. In addition, the dealer does not need to distribute a secret share among the participants, and the scheme assumes neither the availability of a trusted party nor multi-party computation in the secret reconstruction phase. Finally, every player can obtain the multiple secrets fairly. The scheme is collusion-free and avoids the inefficiency of rational single-secret sharing schemes.
Test-suite Reduction Based on MC/DC in Software Fault Localization
WANG Rui, TIAN Yu-li, ZHOU Dong-hong, LI Ning and LI Zhan-huai
Computer Science. 2015, 42 (10): 170-174. 
Abstract PDF(421KB) ( 645 )   
In software regression testing, frequent modification of the software leads to a huge test suite, which makes testing expensive. To address this problem, researchers have proposed test suite reduction methods based on statement or path coverage. However, these methods more or less damage the completeness of the MC/DC coverage of the original test suite. We proposed a new approach, MCDCR, based on the MC/DC coverage rate. MCDCR guarantees MC/DC coverage while harming neither the effectiveness of fault localization nor the test suite reduction rate. Experiments show that MCDCR performs better overall than existing reduction methods.
Multi-model Synthesis Prediction of Software Reliability Based on Functional Networks
WANG Er-wei and WU Qi-zong
Computer Science. 2015, 42 (10): 175-179. 
Abstract PDF(385KB) ( 466 )   
Functional networks were introduced into software reliability prediction. Based on their better interpretability and other attributes compared with neural networks, a multi-model synthesis prediction method of software reliability based on functional networks was proposed. The estimated values of several single models are taken as the input and the actual value as the output, thereby establishing the structure of the functional network. A learning algorithm for the functional network was proposed, and three training strategies were designed and tested. The test results show that, under the third training strategy, the multi-model prediction method based on functional networks has better predictive accuracy and is more effective than single models and the linear integrated models proposed by Lyu.
Integration Verification of Or-split in Test Flow
FANG Qing-hua, SU Jin-hai, LING Zu-rang and HUA Dong-dong
Computer Science. 2015, 42 (10): 180-183. 
Abstract PDF(387KB) ( 359 )   
Integration verification of or-splits in a test flow is a necessary condition for ensuring the correctness, stability and maturity of the model. Based on an analysis of integration in test flows, we gave an integration definition of the condition-constrained set representing an or-split, converting the problem from verifying the or-split to verifying the condition-constrained set. Following the idea of the Huffman tree, we constructed a tree to decide whether the or-splits in a test flow are integrated.
Compositional Type Checking of Descriptor Leaking
LI Qin and MIAO Jin
Computer Science. 2015, 42 (10): 184-188. 
Abstract PDF(421KB) ( 407 )   
Programs manage files, as a kind of resource, using system calls provided by operating systems to manipulate file descriptors. The availability of the system will be significantly degraded if programs deal with file descriptors arbitrarily. We proposed a type system to check whether a program leaks resource (typically file) descriptors. We defined the semantics of descriptor-related operations in sequential programs and proved that the type system is sound with respect to these semantics. In addition, extending the type system with concurrent semantics was discussed.
Rule-based Performance Optimization Model at Software Architecture Level
DU Xin, WANG Chun-yan, NI You-cong, YE Peng and XIAO Ru-liang
Computer Science. 2015, 42 (10): 189-192. 
Abstract PDF(346KB) ( 464 )   
Most rule-based approaches to performance improvement at the software architecture level do not fully consider how many times each rule is used or the order in which rules are applied. As a result, the search space for performance improvement is limited, and the optimal improvement is hard to find. Aiming at this problem, this paper first designed a rule sequence execution framework (RSEF). Furthermore, performance improvement at the software architecture level was abstracted into a mathematical model, RPOM, for solving the optimal rule sequence. In the RPOM model, the mathematical relation between rule usage and the optimal performance improvement is precisely characterized. The results of this paper support rule-based performance improvement approaches in searching a larger space and improving the quality of optimization.
Static R-tree Building Method Based on Cure Clustering Algorithm
LI Song, CUI Huan-yu, ZHANG Li-ping and JING Hai-dong
Computer Science. 2015, 42 (10): 193-197. 
Abstract PDF(413KB) ( 387 )   
The R-tree index structure plays a great role in spatial object queries and complex spatial relation queries. The traditional R-tree spatial index is generated dynamically: the tree is built by a continuous insertion algorithm, and the root is generated by splitting child nodes. Dynamic generation causes low minimum utilization of R-tree nodes. To make up for this inadequacy, a static R-tree algorithm based on the CURE clustering algorithm was proposed, and the CU_RHbuilt tree building method was put forward. The algorithm can effectively handle massive data, recognize clusters of any shape and reduce the overlap of bounding rectangles, and it greatly reduces computational cost because partitioning technology is adopted; space utilization is also rather high. An R-tree node splitting method based on the CURE algorithm was further proposed. Theoretical research and experiments show that the query efficiency of the proposed method is rather high.
Provenance Based Information Management Method for Microblog Messages
HUANG Qing-yu and LU Luo-xian
Computer Science. 2015, 42 (10): 198-201. 
Abstract PDF(330KB) ( 429 )   
In a microblog platform, users' messages arrive in a temporally ordered sequence, and efficient management of microblog streaming data allows users' queries to be handled in time. Based on database provenance, a provenance-based information management method for microblog messages was proposed. First, a provenance is defined as the set of messages about a common event, following the generation, development and evolution of the event. Second, the message stream is divided into different provenances, which are maintained dynamically as new messages arrive. Finally, the messages of a provenance are used to answer users' queries. Experiments show that the proposed method is efficient in memory usage and time cost and can respond to users' queries in a timely manner.
Research on SQL Energy Consumption Modeling and Optimization
GUO Bing-lei, YU Jiong, LIAO Bin and YANG De-xian
Computer Science. 2015, 42 (10): 202-207. 
Abstract PDF(600KB) ( 412 )   
The increasing energy consumption of IT systems forces us to take energy efficiency into consideration when designing a new generation of DBMSs. Because SQL queries consume roughly 70%~90% of database resources, the energy efficiency of a database can be improved through query optimization and energy consumption modeling. After an in-depth study of the query processing mechanism, an energy consumption model for SQL was proposed, and experiments were designed on a series of query optimization methods to show their effectiveness for performance improvement and energy reduction. The experiments and energy consumption data analysis show that CPU utilization is the most critical factor affecting power consumption, that SQL energy optimization can ignore memory optimization but should balance performance optimization against power optimization, and that the model and the proposed methods have good application value.
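The finding that CPU utilization dominates power consumption is commonly captured by a linear interpolation between idle and peak server power. A minimal sketch of such a model follows; the coefficients are illustrative assumptions, not measurements from the paper:

```python
def query_power(cpu_util, p_idle=70.0, p_peak=120.0):
    """Estimate instantaneous server power (watts) from CPU utilization.

    Linear model P = P_idle + (P_peak - P_idle) * u, a standard first-order
    approximation; p_idle and p_peak here are hypothetical values.
    """
    if not 0.0 <= cpu_util <= 1.0:
        raise ValueError("cpu_util must be in [0, 1]")
    return p_idle + (p_peak - p_idle) * cpu_util

def query_energy(cpu_utils, dt=1.0, **kw):
    """Integrate utilization samples (one per dt seconds) into joules."""
    return sum(query_power(u, **kw) * dt for u in cpu_utils)
```

Calibrating p_idle and p_peak against measured power traces, as the paper's experiments do for its own model, turns this into a usable per-query energy estimator.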
Improved Apriori Algorithm Based on Bigtable and MapReduce
WEI Ling, WEI Yong-jiang and GAO Chang-yuan
Computer Science. 2015, 42 (10): 208-210. 
Abstract PDF(334KB) ( 608 )   
BM-Apriori was designed for big data to address the poor efficiency of Apriori in mining frequent itemsets. BM-Apriori exploits Bigtable and MapReduce together to optimize the Apriori algorithm. Compared with improved Apriori algorithms based on the MapReduce model alone, it uses Bigtable timestamps to avoid generating a large number of key/value pairs, which saves pattern matching time and requires only one scan of the database. In addition, a transaction mark column is added to the set list so that transaction marks are obtained automatically when computing support counts. BM-Apriori was executed on the Hadoop platform, and the experimental results show that it has higher efficiency and scalability.
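For readers unfamiliar with the baseline that BM-Apriori optimizes, a minimal single-machine sketch of classic Apriori frequent-itemset mining follows (this is the textbook algorithm, not the paper's Bigtable/MapReduce version):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: returns {frozenset itemset: support count} for all
    itemsets whose support count is at least min_support."""
    transactions = [frozenset(t) for t in transactions]
    # count frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # candidate generation from items of the frequent (k-1)-itemsets,
        # with the Apriori prune: every (k-1)-subset must be frequent
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result
```

The repeated database scans in the counting step are exactly what BM-Apriori avoids by exploiting Bigtable timestamps and a transaction mark column.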
Elite Orthogonal Learning Firefly Algorithm
ZHOU Ling-yun, DING Li-xin and HE Jin-rong
Computer Science. 2015, 42 (10): 211-216. 
Abstract PDF(483KB) ( 463 )   
To overcome the shortcomings of the firefly algorithm, such as slow convergence and low computational accuracy, an elite orthogonal learning firefly algorithm was proposed. An elite firefly is introduced to construct a guidance vector using an orthogonal learning strategy, which preserves and discovers useful information in the population's best positions and directs the swarm toward the globally optimal region. At the same time, an adaptive step size is used to balance the exploration and exploitation abilities of the algorithm, and a minimum attractiveness parameter guarantees attraction between fireflies that are far apart. We compared the proposed algorithm with the standard firefly algorithm and three other improved firefly algorithms on six benchmarks; the results show that the proposed algorithm achieves faster convergence and better solution accuracy.
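For context, the standard firefly move that such variants build on, including the minimum-attractiveness floor the abstract mentions, can be sketched as follows (parameter values are illustrative, not the paper's):

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, beta_min=0.2, gamma=1.0, alpha=0.25):
    """Move firefly at xi toward brighter firefly at xj.

    Attractiveness decays with squared distance, beta = beta_min +
    (beta0 - beta_min) * exp(-gamma * r^2), so beta_min keeps distant
    pairs attracted; alpha scales a uniform random perturbation.
    """
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta_min + (beta0 - beta_min) * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

The paper's contribution then replaces the blind random perturbation with an orthogonal-learning guidance vector built from an elite firefly, and adapts alpha over the run.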
Method of Acquiring Event Commonsense Knowledge
WANG Ya, CHEN Long, CAO Cong, WANG Ju and CAO Cun-gen
Computer Science. 2015, 42 (10): 217-221. 
Abstract PDF(528KB) ( 624 )   
The paper used the semantics, grammar and commonsense knowledge of events as the standard for constructing a multi-level taxonomy of events, which can be used to extract commonsense knowledge about these events. We used event frames to represent all the events. An event frame includes a definition of the event, its relationships with other events, its grammar, its predicate representation, example sentences of the event, and commonsense knowledge about the event. To demonstrate the utility of our method, we used the transaction event as an example.
Extended Observation Window for Diagnosing Discrete-event Systems with Incomplete Event Sequence Model
CHAI Rui-ya, ZHU Yi-an, LU Wei and SHI Jia-long
Computer Science. 2015, 42 (10): 222-225. 
Abstract PDF(384KB) ( 380 )   
Most traditional approaches to fault diagnosis of discrete-event systems require a complete and accurate model of the system to be diagnosed. Aiming at this limitation, we presented an approach for diagnosing systems with incomplete event sequence models. The approach has three aspects: adding information to the system model, dynamically extending the observation window, and merging the observations of two windows. It not only processes the event sequence model to broaden the scope of application, but also resolves observation delays to a certain extent. Tests show that, at low complexity, the approach produces the expected results on certain incomplete models.
Opposition-based Particle Swarm Optimization with Adaptive Cauchy Mutation
KANG Lan-lan, DONG Wen-yong and TIAN Jiang-sen
Computer Science. 2015, 42 (10): 226-231. 
Abstract PDF(480KB) ( 1225 )   
To solve the premature convergence problem of traditional particle swarm optimization (PSO), this paper proposed an opposition-based PSO with adaptive Cauchy mutation. The new algorithm applies an adaptive Cauchy mutation (ACM) strategy on top of generalized opposition-based learning (GOBL). Generating solutions with GOBL expands the search space and enhances the global exploration ability of PSO. Meanwhile, the adaptive Cauchy mutation disturbs the current optimal particle and adaptively selects mutation points, preventing the best particle from being trapped in local optima and causing search stagnation; this improves the exploitation ability of PSO and helps the algorithm converge quickly and smoothly to the global optimum. To further balance global search and local exploration, a nonlinear adaptive inertia weight is applied. The new algorithm was compared with several opposition-based PSO variants on 14 benchmark functions, and the experimental results show that it greatly improves solution accuracy and convergence speed.
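The two operators the abstract names are standard in the opposition-based learning literature; a minimal sketch using the usual formulations (assumed here, not taken from the paper's code) is:

```python
import math
import random

def gobl_opposite(x, lo, hi, k=None):
    """Generalized opposition-based learning point for position x within
    per-dimension bounds [lo, hi]: x* = k * (lo + hi) - x, with k drawn
    uniformly from [0, 1] (the common GOBL formulation)."""
    if k is None:
        k = random.random()
    return [k * (a + b) - xi for xi, a, b in zip(x, lo, hi)]

def cauchy_mutate(x, scale=1.0):
    """Cauchy mutation of the best particle: the heavy-tailed Cauchy
    distribution occasionally produces long jumps out of local optima."""
    return [xi + scale * math.tan(math.pi * (random.random() - 0.5))
            for xi in x]
```

In a GOBL-PSO loop, each generation evaluates both the swarm and its opposite population and keeps the fitter positions, while the mutation is applied only to the global best particle.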
Approach to Place Recommendation Based on User Check-in Behavior in Online Network
ZHOU Er-zhong, HUANG Jia-jin and XU Xin-xin
Computer Science. 2015, 42 (10): 232-234. 
Abstract PDF(315KB) ( 345 )   
Location-based place recommendation services pay increasing attention to users' personalized needs. Hence, the characteristics of user trips were extracted from the links between entities in an online social network, such as social ties between users, interactions between users and places, and proximity between places. An approach to personalized place recommendation based on user check-in behavior in online social networks was consequently proposed. The approach ranks candidate places to meet the user's personalized need by combining the user's preference for a place, the place's influence on the target user, and social recommendations from the user's friends. Experimental results show that the proposed approach is feasible and effective in the given context.
Robust Smooth Support Vector Machine
HU Jin-kou and XING Hong-jie
Computer Science. 2015, 42 (10): 235-238. 
Abstract PDF(289KB) ( 409 )   
Smooth support vector machine (SSVM) is an improved model of the traditional support vector machine. SSVM uses a smoothing technique to reformulate the quadratic programming problem of the traditional SVM as an unconstrained optimization problem, which is then solved with the Newton-Armijo algorithm. In this paper, robust smooth support vector machine (RSSVM) was proposed on the basis of SSVM by replacing the L2-norm-based regularization term of SSVM with an M-estimator, and the half-quadratic minimization method is used to solve the corresponding optimization problem of RSSVM. Experimental results demonstrate that the proposed method can efficiently enhance the anti-noise capability of SSVM.
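The smoothing step the abstract refers to is, in the SSVM literature, the replacement of the plus function (x)_+ = max(x, 0) by the log-exponential approximation p(x, a) = x + (1/a) * ln(1 + exp(-a*x)), which tends to (x)_+ as a grows. A numerically careful sketch of that approximation:

```python
import math

def smooth_plus(x, alpha=5.0):
    """Smooth approximation of the plus function used by SSVM:
    p(x, a) = x + (1/a) * log(1 + exp(-a*x)); p -> max(x, 0) as a -> inf.
    log1p and an early return keep the evaluation overflow-free."""
    if -alpha * x > 700:          # exp would overflow; limit is 0 here anyway
        return 0.0
    return x + math.log1p(math.exp(-alpha * x)) / alpha
```

Because p is twice differentiable everywhere, the reformulated objective can be minimized with the Newton-Armijo iteration the abstract mentions, instead of a constrained QP solver.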
Research on Tagging Biomedical Event Trigger
WEI Xiao-mei, HUANG Yu, CHEN Bo and JI Dong-hong
Computer Science. 2015, 42 (10): 239-243. 
Abstract PDF(408KB) ( 377 )   
Event extraction from the biomedical literature plays an important role in knowledge mining in the biomedical domain, and trigger identification is the key step in biomedical event extraction. We used rich features, including lemma, context, phrase label, word cluster and a learned trigger dictionary, to build several kinds of CRF models, and then chose the best model for each trigger type to form a hybrid model. Evaluation on the BioNLP 2009 shared task data set shows that our approach achieves good performance, laying a foundation for biomedical event extraction.
Verification of Concurrent CSP Systems Based on Petri Net
LIU Yan-qing, ZHAO Ling-zhong and QIAN Jun-yan
Computer Science. 2015, 42 (10): 244-250. 
Abstract PDF(904KB) ( 447 )   
Communicating sequential processes (CSP) and Petri nets are two important formal methods for analyzing concurrent systems. The CSP language features high-level abstraction, which is useful for effectively describing the interactions between concurrent processes, but it is weak at describing and analyzing the physical structure of systems. Petri nets are a formal, graphical tool for modeling and analyzing concurrent systems, focusing on the physical description of system structure and on property analysis. This paper combined the advantages of both: CSP is first used to describe a concurrent system, which is then translated into a Petri net so that the system's dynamic behavior can be analyzed; finally, the model checking tool TINA is used to analyze and verify the system's properties. It is shown that safety properties of CSP processes that cannot be verified with existing analysis tools such as PAT can be analyzed once the CSP description is transformed into a Petri net. The transformation effectively expands the scope of verifiable properties for concurrent systems described in CSP.
DGA Fault Diagnosis Based on CBR Method with Feature Transformation
GAO Ming-lei, ZHANG Zhong-jiang and JI Bo
Computer Science. 2015, 42 (10): 251-255. 
Abstract PDF(425KB) ( 467 )   
The Pearson correlation coefficient measures the linear relationship between two variables and is widely used as the CBR matching algorithm for DGA fault diagnosis. However, the traditional application has two problems: it favors features with larger data ranges, and it treats the contributions of all features as equal. To address these issues, the paper proposed a log-function feature transformation to narrow the data range, solving the discrimination problem, and a mean-square-deviation feature weighting method to distinguish contribution levels, improving the accuracy of DGA fault diagnosis. Experimental results show that the proposed FTW_Pearson algorithm is superior to the Duval triangle method popularly used in real applications, to the traditional Pearson algorithm without feature transformation and weighting, and to the Bayes and BPNN algorithms.
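A plausible reading of the abstract's pipeline, a log transform followed by feature-weighted Pearson correlation, can be sketched as follows; the exact transform and weighting formulas in the paper may differ:

```python
import math

def ftw_pearson(x, y, w):
    """Feature-transformed, weighted Pearson similarity between two cases.

    x, y: raw feature vectors (e.g. gas concentrations); w: per-feature
    weights. log1p narrows large data ranges so no feature dominates.
    """
    fx = [math.log1p(v) for v in x]
    fy = [math.log1p(v) for v in y]
    sw = sum(w)
    mx = sum(wi * v for wi, v in zip(w, fx)) / sw   # weighted means
    my = sum(wi * v for wi, v in zip(w, fy)) / sw
    cov = sum(wi * (a - mx) * (b - my) for wi, a, b in zip(w, fx, fy))
    vx = sum(wi * (a - mx) ** 2 for wi, a in zip(w, fx))
    vy = sum(wi * (b - my) ** 2 for wi, b in zip(w, fy))
    if vx == 0 or vy == 0:
        return 0.0
    return cov / math.sqrt(vx * vy)
```

In a CBR retrieval loop, the stored case maximizing this similarity to the query case supplies the diagnosis.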
Collaborative Filtering Recommendation Algorithm Based on Improved Locality-sensitive Hashing
LI Hong-mei, HAO Wen-ning and CHEN Gang
Computer Science. 2015, 42 (10): 256-261. 
Abstract PDF(499KB) ( 964 )   
Collaborative filtering is one of the key technologies widely and successfully applied in personalized recommendation systems. Its critical step is finding the k nearest neighbors (kNNs) used to predict user ratings and make recommendations. Rating data are large-scale, high-dimensional and extremely sparse, which hurts recommendation quality, and direct similarity computation finds the nearest neighbors too slowly for real-time use. We therefore proposed, and then improved, a collaborative filtering recommendation algorithm based on locality-sensitive hashing. The algorithm applies locality-sensitive hashing based on p-stable distributions to reduce the dimensionality of, and index, the large rating data; a multi-probe mechanism then improves the efficiency of finding the approximate nearest users of the target user. A weighted method is used to predict user ratings, and finally collaborative filtering recommendation is performed. Experiments on a typical dataset show that the proposed algorithm overcomes the limitations of high dimensionality and sparsity to some degree and offers good recommendation performance, high efficiency and low memory consumption.
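The p-stable LSH family the abstract relies on hashes a vector by a random Gaussian projection, so nearby rating vectors tend to share a bucket. A minimal sketch of one such hash function (bucket width and seed are illustrative):

```python
import random

def make_pstable_hash(dim, w=4.0, seed=0):
    """Build one p-stable (Gaussian, p=2) LSH function
    h(v) = floor((a . v + b) / w), following Datar et al.'s scheme."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # 2-stable projection
    b = rng.uniform(0.0, w)                          # random bucket offset
    def h(v):
        return int((sum(ai * vi for ai, vi in zip(a, v)) + b) // w)
    return h
```

An index concatenates several such functions per hash table and uses multiple tables; the multi-probe improvement the abstract mentions additionally inspects buckets adjacent to the query's bucket instead of adding more tables.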
Game between Mobile Operators,Bank and Third Party Payment Services Provider in Mobile Payment Market
SHUAI Qing-hong, RUI Ting-ting and HUANG Tao
Computer Science. 2015, 42 (10): 262-265. 
Abstract PDF(330KB) ( 406 )   
With the rapid development of the mobile Internet and the mobile payment market, mobile payment has become the key mobile value-added business. To obtain the greatest profits, the main participants in the mobile payment market have begun to compete for market share. Considering mobile operators, banks and third-party payment service providers, and combining theory with practice, this paper set up Cournot models of the mobile market under complete and incomplete information. By solving and analyzing the models, an operation mode of the mobile payment market suited to China's national conditions was put forward: mobile operators collaborate with banks, while third-party service providers offer support.
Cluster Pattern Based RDF Data Clustering Method
YUAN Liu and ZHANG Long-bo
Computer Science. 2015, 42 (10): 266-270. 
Abstract PDF(772KB) ( 525 )   
Managing and exploiting the large amount of available RDF data has become a vital issue in the Web data management field. Clustering is usually adopted to partition large-scale RDF datasets for efficient processing, but related research tends to use classical clustering methods and neglects the structural features of RDF triples. This paper analyzed RDF clustering results intensively and defined three types of cluster patterns. Based on these patterns, a novel RDF data clustering strategy was proposed: by re-describing the RDF dataset, the cluster patterns can be generated automatically. Experiments on different test benches demonstrate the accuracy and efficiency of the new method.
Rough Set Models for Incomplete XML Information System
YIN Li-feng and DENG Wu
Computer Science. 2015, 42 (10): 271-274. 
Abstract PDF(292KB) ( 421 )   
With XML becoming the standard for information representation and data exchange on the Internet, and with uncertain data arising in many fields, the management of uncertain XML databases has become a research focus. First, the leaf nodes of an XML document are allowed to have lost or missing (null) values, and the incomplete XML information system is proposed. Second, the definitions of the node tolerance relation, the limited tolerance relation and the threshold tolerance relation are given, and three corresponding rough set models for incomplete XML information systems are defined based on rough set theory. Finally, worked examples show that the limited tolerance relation overcomes the coarse classification of the tolerance relation, and that the threshold tolerance relation achieves better classification with a reasonably chosen threshold, thereby enhancing the prediction and classification accuracy of XML data.
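The tolerance relation these node-level relations generalize is standard in rough set theory for incomplete data: two objects are indiscernible if every attribute value either matches or is missing. A minimal sketch on flat attribute tuples (the paper adapts the idea to XML leaf nodes, which is an assumption of this illustration):

```python
def tolerant(x, y, missing=None):
    """Tolerance relation on incomplete attribute tuples: x and y are
    tolerant if, at every position, the values agree or one is missing."""
    return all(a == b or a is missing or b is missing
               for a, b in zip(x, y))
```

The limited and threshold variants the abstract introduces then tighten this: for example, by requiring some minimum number or proportion of positions with actual (non-missing) agreement.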
Information Retrieval Model for Domain-specific Structural Documents and its Application in Agricultural Disease Prescription Retrieval
LIU Tong and NI Wei-jian
Computer Science. 2015, 42 (10): 275-280. 
Abstract PDF(881KB) ( 387 )   
Unlike plain text, professional documents in various domains are mostly structural documents composed of several roughly fixed textual fields and embedding rich domain knowledge. To incorporate this inherent structure and domain knowledge, we proposed a novel retrieval model for professional documents based on structural retrieval. In particular, we first derive a domain model from a given collection of professional documents and then use it as the basis for a domain-specific structural retrieval function. We applied the proposed model to agricultural disease prescriptions, a representative type of professional document in agriculture, and developed a prototype search engine for them. Experimental results on a real prescription collection show the advantages of the proposed model over conventional information retrieval approaches.
Weighted KNN Data Classification Algorithm Based on Rough Set
LIU Ji-yu, WANG Qiang, LUO Zhao-hui, SONG Hao and ZHANG Lv-yun
Computer Science. 2015, 42 (10): 281-286. 
Abstract PDF(464KB) ( 432 )   
Rough set theory is one of the basic methods for dealing with imprecise or indefinite problems. Because it requires neither prior knowledge about the dataset being analyzed nor manually set analysis parameters, it is widely used in pattern recognition and data mining. A core problem for rough set theory is how to classify a sample never encountered during training, and this problem was discussed in detail in this paper. According to the importance of the condition attributes, a weighted KNN algorithm was proposed to classify samples that cannot be matched precisely to decision rules, and a comparison with the weighted minimum distance (WMD) method shows the efficiency of our algorithm. At the same time, existing algorithms for attribute value reduction in rough sets were analyzed and another point of view was put forward. Experiments on several UCI data sets, and comparisons with various recently proposed algorithms, show that our algorithm is superior in overall effect.
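The attribute-weighted KNN idea can be sketched as follows, with the per-attribute weights assumed to come from a rough-set importance analysis (an assumption of this sketch; the paper's exact weighting may differ):

```python
import math
from collections import Counter

def weighted_knn(train, labels, x, weights, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using a per-attribute weighted Euclidean distance so that important
    condition attributes dominate the neighborhood."""
    def dist(u, v):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, u, v)))
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

Setting a weight to zero removes that attribute entirely, so attribute reduction and attribute weighting can be handled in the same framework.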
Real-time Tracking Algorithm for Fast Target Based on Dynamical Scanning Boxes
ZHENG Yuan-li and HU Zhi-kun
Computer Science. 2015, 42 (10): 287-291. 
Abstract PDF(926KB) ( 610 )   
TLD is a long-term tracking algorithm for a single target that has drawn wide attention recently; it can re-detect a target even after the target has been lost. However, its real-time performance is poor because of the large number of scanning boxes. We proposed a method that generates scanning boxes dynamically, which reduces computation time efficiently and makes TLD suitable for real-time use. Experiments compared the improved algorithm with the original algorithm, Camshift and CT (compressive tracking). The results show that when applied to a real-time camera, the improved algorithm tracks faster and more accurately, and when applied to image sequences, its speed and accuracy are also better than the other algorithms.
Multi-characters Semantic Motion Synthesis Based on Deformable Motion Model
WANG Xin, CHEN Qiu-di, LIANG Chao-kai and WANG Wan-liang
Computer Science. 2015, 42 (10): 292-296. 
Abstract PDF(1467KB) ( 417 )   
Intelligent crowd motion makes virtual environments seem realistic. Aiming at the high dimensionality and poor controllability of crowd motion data, a deformable motion model for multiple characters was proposed. It decomposes crowd motion data into geometric and temporal variations, uses PCA to reduce the dimensionality, and then constructs a low-dimensional semantic space for the multi-character deformable motion model. Experiments show that the proposed method can adjust parameters according to semantic requirements for semantic multi-character motion analysis and synthesis.
Parallel Computation Method of Image Features Based on GPU
ZHANG Jie, CHAI Zhi-lei and YU Jin
Computer Science. 2015, 42 (10): 297-300. 
Abstract PDF(678KB) ( 588 )   
Feature extraction and description are the foundation of many computer vision applications. Owing to the high-dimensional, pixel-wise processing involved, they are computationally intensive, have poor real-time performance, and are therefore hard to use in real-world applications. In this paper, the common computational modules used in feature extraction and description, the pyramid scheme and gradient computation, were studied, and a method for computing these modules in parallel on NVIDIA GPUs with CUDA was introduced. Computational efficiency was further improved by optimizing the accessing of global, texture and shared memory. Experimental results show a 30x speed-up for the GPU-based pyramid scheme and gradient computation over the CPU, and employing these optimizations in a GPU-based HOG (histogram of oriented gradients) implementation yields a 40x speed-up over the CPU. The method proposed in this paper is significant for implementing fast feature extraction and description on GPUs.
Fast Face Alignment Method Based on Sparse Cascade Regression and its Application on Mobile Devices
DENG Jian-kang, YANG Jing, SUN Yu-bao and LIU Qing-shan
Computer Science. 2015, 42 (10): 301-305. 
Abstract PDF(1027KB) ( 422 )   
References | RelatedCitation | Metrics
Efficient face alignment is the key problem for face applications on mobile platforms, which have limited computing and storage capacity. We studied the problem of fast face alignment on mobile platforms. To reduce the computing and storage requirements of face alignment, a sparsity-constrained cascade regression model was proposed. A sparse constraint was introduced when learning the regression matrix, which not only selects robust features but also compresses the model to about 5% of its original size. We further built a fast face alignment algorithm for mobile platforms on top of this sparse cascade regression model. First, after face detection, the facial landmarks at the tip of the nose and the corners of the mouth and eyes are quickly located by binary features, and the face pose is estimated. The face image is rotated to a frontal view according to the pose. Then the corresponding model (frontal or profile) is selected according to the pose, and cascade regression with the sparse constraint is used for face alignment. Extensive experiments show that the proposed alignment method is effective and efficient with a compact model. On a Samsung Note 3 smartphone, the alignment time for each face image is about 10ms, and the whole apk is only 4MB, making it suitable for face applications on mobile platforms.
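One standard way to learn a sparsity-constrained regression matrix of the kind described is L1-regularized least squares solved with ISTA (iterative soft-thresholding). The sketch below shows a single cascade stage under that assumption; the paper's exact objective and solver may differ:

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 norm: shrink toward zero by lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def train_sparse_stage(features, targets, lam=0.05, lr=0.1, iters=500):
    """One cascade-regression stage: learn a sparse matrix W mapping
    local features to landmark updates via ISTA on
    (1/2n)||XW - Y||^2 + lam * ||W||_1."""
    n, d = features.shape
    _, p = targets.shape
    W = np.zeros((d, p))
    for _ in range(iters):
        grad = features.T @ (features @ W - targets) / n
        W = soft_threshold(W - lr * grad, lr * lam)  # gradient + shrinkage
    return W
```

The soft-thresholding zeroes out most entries of `W`, which is exactly what allows the stored model to shrink to a few percent of its dense size.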
Image Fusion Method Based on Best Seam-line for Serial Remote Sensing Images Mosaic
QIN Xu-jia, WANG Qi, WANG Hui-ling, ZHENG Hong-bo and CHEN Sheng-nan
Computer Science. 2015, 42 (10): 306-310. 
Abstract PDF(1190KB) ( 509 )   
References | RelatedCitation | Metrics
Weighted fusion of pixels in the overlapped region is a key technology in image fusion, but it can introduce ghosting. In this paper, an improved algorithm for generating the optimal seam-line and a fusion method along the best seam-line were presented and applied to the mosaicking and fusion of serial remote sensing images. Firstly, edge weights are set in the overlapping area of the images; then the maximum-flow/minimum-cut is computed, yielding the best seam-line in the overlap. Introducing image gradient information into the edge weights makes the seam-line more accurate. For fusion, a strip-shaped fusion area is generated along the best seam-line, and the images on both sides of the seam-line within this strip are blended with a gradual fade-in/fade-out, making the mosaic more natural. For serial remote sensing images, bundle adjustment is used to tune the mosaic parameters so as to minimize the global error. Experimental results show that the method effectively eliminates ghosting, produces accurate mosaics and fusion, and obtains good results on serial remote sensing images.
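Once a seam-line is known, the strip-shaped fade-in/fade-out blend can be sketched as below. This is a minimal NumPy illustration for a per-row seam column between two aligned single-channel images; the seam itself (from the max-flow/min-cut step) is taken as given, and the function and parameter names are illustrative:

```python
import numpy as np

def feather_blend(left, right, seam_cols, half_width=8):
    """Cross-fade two aligned images inside a strip of +/- half_width pixels
    around a per-row seam column; outside the strip each pixel comes purely
    from one image, so ghosting is confined to the narrow transition zone."""
    h, w = left.shape
    out = np.empty((h, w), dtype=np.float64)
    cols = np.arange(w)
    for r in range(h):
        # weight of the right image: 0 left of the strip, 1 right of it,
        # a linear ramp across the strip centred on the seam column
        alpha = np.clip((cols - (seam_cols[r] - half_width)) /
                        (2.0 * half_width), 0.0, 1.0)
        out[r] = (1.0 - alpha) * left[r] + alpha * right[r]
    return out
```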
Anti-noise BCFCM Algorithm for Brain MRI Segmentation
LUAN Fang-jun, ZHOU Jia-peng and ZENG Zi-ming
Computer Science. 2015, 42 (10): 311-315. 
Abstract PDF(1311KB) ( 429 )   
References | RelatedCitation | Metrics
Magnetic resonance imaging (MRI) of the brain is an important tool in the clinical diagnosis of brain diseases, and accurate segmentation of brain tissues is one of its important steps. However, accurate segmentation results are difficult to obtain because of noise and intensity inhomogeneities in MRI. Among MRI segmentation methods, the Bias-Corrected FCM (BCFCM) algorithm, based on Fuzzy C-Means (FCM), uses spatial information and an estimate of the intensity inhomogeneity to handle the problems the inhomogeneity causes. Because BCFCM fails to account for high noise levels when estimating the intensity inhomogeneity, its segmentation results are not accurate enough. For brain tissue segmentation in MRI, this paper proposed a fast preprocessing method that removes the skull and its appendages, together with an improved BCFCM algorithm. The improved algorithm automatically adapts the window size in the objective function by estimating the noise level during the iterations. In addition, a Gaussian kernel in the objective function smooths the intensity inhomogeneity, and the inhomogeneity estimate is bounded by an experimentally chosen threshold, which effectively avoids incorrect inhomogeneity estimates in the segmentation results. Experimental results show that the proposed algorithm not only segments brain tissues effectively and accurately, but also copes with high noise levels and intensity inhomogeneities.
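The fuzzy C-means update that BCFCM builds on can be sketched as below, for 1-D intensities. This is the plain FCM step only (a NumPy illustration); the bias-field estimation, spatial window and noise adaptation described above are the paper's additions and are omitted here:

```python
import numpy as np

def fcm_step(x, centers, m=2.0):
    """One fuzzy C-means iteration: memberships from inverse squared
    distances (fuzzifier m), then centers as membership-weighted means."""
    # squared distance of every intensity to every cluster center
    d2 = (x[:, None] - centers[None, :]) ** 2 + 1e-12
    inv = d2 ** (-1.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)   # fuzzy memberships, rows sum to 1
    um = u ** m
    new_centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, new_centers
```

BCFCM modifies the objective behind this step so that each pixel's intensity is corrected by the estimated bias field before the distances are computed.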
Fire Image Detection Based on LBP and GBP Features with Multi-scales
LU Ying, WANG Hui-qin, CHAI Qian and QIN Li-ke
Computer Science. 2015, 42 (10): 316-320. 
Abstract PDF(940KB) ( 653 )   
References | RelatedCitation | Metrics
To improve the fire detection rate of video monitoring in large-span buildings, this paper proposed a fire recognition method based on multi-scale LBP and GBP features. Series of flame images are first preprocessed in RGB space, and candidate flame areas are located by their stroboscopic (flicker) feature. A Gaussian difference scale space is built for the fire images, and LBP and GBP features at different scales are extracted from the candidate areas. Finally, these features are fed to an SVM classifier to decide whether a flame is present. Experimental results show that the combination of LBP and GBP is invariant to uneven illumination and improves the accuracy of flame recognition.
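The basic LBP descriptor used here can be sketched as follows (a minimal NumPy version of the classic 8-neighbour LBP with a 256-bin histogram; the GBP variant and the multi-scale DoG pyramid are not shown):

```python
import numpy as np

def lbp_image(gray):
    """8-neighbour local binary pattern: each interior pixel becomes the
    byte formed by thresholding its neighbours against the centre value."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalised LBP histogram: the texture feature fed to the SVM."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

Because each code depends only on signs of intensity differences, a smooth illumination gradient leaves the histogram largely unchanged, which is the invariance the abstract relies on.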
Real-time Detection and Recognition for Large Numbers of Less-texture Objects
TAO Jun, LIU Jian-ming, WANG Ming-wen and WAN Jian-yi
Computer Science. 2015, 42 (10): 321-324. 
Abstract PDF(828KB) ( 356 )   
References | RelatedCitation | Metrics
Existing object detection methods cannot achieve real-time detection and recognition when there are many object classes. To solve this problem, a real-time detection and recognition algorithm for many classes of texture-less objects was put forward, based on Objectness and gradient-orientation templates. Firstly, it evaluates potential objects by computing their Objectness scores, which greatly reduces the number of matching windows. Then, in the areas where objects may appear, it detects and recognizes texture-less objects of many classes with a template matching method based on the template's dominant gradient orientations and a lookup table. The algorithm is robust to texture-less objects and orientation-independent during matching.
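The lookup-table scoring of quantised gradient orientations can be sketched as below. This is a minimal LINEMOD-style illustration under assumed choices (8 orientation bins, |cos| similarity), not the paper's exact tables:

```python
import numpy as np

N_BINS = 8  # gradient orientations in [0, 180) quantised into 8 directions

# lookup table: similarity of two quantised orientations is the |cos|
# of the angle between them, so the score is contrast-sign independent
_ANGLES = np.arange(N_BINS) * np.pi / N_BINS
LUT = np.abs(np.cos(_ANGLES[:, None] - _ANGLES[None, :]))

def quantize(orientations_deg):
    """Map orientations in degrees to one of N_BINS discrete directions."""
    return (np.asarray(orientations_deg) // (180.0 / N_BINS)).astype(int) % N_BINS

def template_score(template_bins, window_bins):
    """Mean LUT similarity between a template's quantised orientations and a
    candidate window's: one table read per feature instead of trigonometry."""
    return LUT[template_bins.ravel(), window_bins.ravel()].mean()
```

Precomputing `LUT` is what makes per-window matching cheap enough to run in real time over the Objectness-filtered candidate windows.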