Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 41 Issue 4, 14 November 2018
Survey of Mobile Sensing
XIONG Ying,SHI Dian-xi,DING Bo and DENG Lu
Computer Science. 2014, 41 (4): 1-8. 
Abstract PDF(835KB) ( 1416 )   
Maturity of the mobile Internet and the popularity of intelligent terminals equipped with various sensors have spawned a whole new field of research: mobile sensing. Mobile sensing is human-centric, with humans playing an important role. Since the concept of mobile sensing was proposed, it has received wide attention from academia and industry, and it is applied in transportation, medical care and health, and many other fields, as well as in many aspects of daily life. This paper first expounds the connotations of mobile sensing from the perspectives of its concept, sensing paradigms, sensing scale and characteristics; on this basis, it classifies mobile sensing applications, then analyzes, summarizes and compares typical existing applications. Furthermore, it describes the trend of mobile sensing toward large-scale, systematic and service-oriented “terminal+cloud” architectures, and finally focuses on clarifying the new challenges and coping strategies in large-scale environments.
Survey on Entity Resolution
TAN Ming-chao,DIAO Xing-chun and CAO Jian-jun
Computer Science. 2014, 41 (4): 9-12. 
Abstract PDF(485KB) ( 387 )   
Entity resolution (ER) is one of the central issues of data integration and information retrieval. Its purpose is to find all real-world entities in a given dataset and to cluster the references that refer to the same entity. The ER process is partitioned into a blocking step, a record-comparison step and a matching-decision step. Blocking methods are summarized according to the way records are clustered together, record-pair comparison methods are surveyed according to the granularity at which strings are compared, and decision models are introduced according to the way records are associated with each other. Finally, future research issues are discussed.
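The three-step pipeline described above (blocking, record comparison, matching decision) can be sketched in a few lines. The prefix blocking key, the Jaccard token similarity and the 0.5 threshold below are illustrative choices, not the paper's:

```python
from itertools import combinations

def blocking_key(record):
    # Hypothetical blocking function: first 3 letters of the name field.
    return record["name"][:3].lower()

def jaccard(a, b):
    # Token-level Jaccard similarity, a coarse-grained string comparator.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def resolve(records, threshold=0.5):
    # Step 1: blocking -- only records sharing a key are compared.
    blocks = {}
    for r in records:
        blocks.setdefault(blocking_key(r), []).append(r)
    # Steps 2-3: pairwise comparison and matching decision.
    matches = []
    for block in blocks.values():
        for r1, r2 in combinations(block, 2):
            if jaccard(r1["name"], r2["name"]) >= threshold:
                matches.append((r1["id"], r2["id"]))
    return matches

records = [
    {"id": 1, "name": "John A Smith"},
    {"id": 2, "name": "John Smith"},
    {"id": 3, "name": "Mary Jones"},
]
print(resolve(records))  # [(1, 2)]
```

Blocking keeps the comparison count far below the quadratic worst case, which is exactly why the survey treats it as a separate step.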
Research Progress in Matrix Completion Algorithms
SHI Jia-rong,ZHENG Xiu-yun and ZHOU Shui-sheng
Computer Science. 2014, 41 (4): 13-20. 
Abstract PDF(1413KB) ( 3324 )   
As an important development of compressed sensing theory, matrix completion and recovery has become a new and remarkable technique for signal and image processing. This paper surveys the latest research progress in matrix completion algorithms. Firstly, it analyzes several main algorithms for the nuclear norm minimization model and elaborates their iterative procedures and principles. Secondly, it discusses the low-rank matrix factorization model of matrix completion and lists the corresponding new algorithms that have emerged in recent years. It then covers other variants derived from these two models and points out their solving methods. In numerical experiments, performance comparisons are made among the main matrix completion algorithms. Finally, future research directions and focuses for matrix completion algorithms are given.
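As one concrete instance of the nuclear-norm-minimization family the survey analyzes, here is a minimal Singular Value Thresholding (SVT) sketch; the parameter choices (tau = 5n, delta = 1.5, the 60% sampling rate) follow common practice and are not taken from the paper:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Singular Value Thresholding for nuclear-norm minimization (a sketch).
    M_obs: observed matrix (zeros elsewhere); mask: 1 where observed."""
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + delta * mask * (M_obs - X)        # correct observed entries
    return X

rng = np.random.default_rng(0)
n, r = 20, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = (rng.random((n, n)) < 0.6).astype(float)  # observe ~60% of entries
X = svt_complete(A * mask, mask, tau=5 * n, delta=1.5)
rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
print(rel_err < 0.25)
```

The shrinkage step is the proximal operator of the nuclear norm, which is what links this iteration to the convex model the survey discusses.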
Architecture Reliability Analysis Based on Extended Influence Diagram
ZHANG Meng-meng,LUO Ai-min,YANG Wei-sheng and ZHANG Xiao-kang
Computer Science. 2014, 41 (4): 21-23. 
Abstract PDF(1059KB) ( 383 )   
The extended influence diagram, an extension of the influence diagram, is a graph model for solving complex decision problems based on uncertain information. Architecture reliability analysis is one of the main components of architecture quality measurement. This paper introduces the basic idea of the extended influence diagram, develops an extended influence diagram of architecture reliability with a defense graph, and then, combined with the architecture data model, evaluates the utility value of the target node. The experimental result suggests the feasibility of architecture reliability analysis with extended influence diagrams.
Speed-up Robust Feature Image Registration Algorithm Based on CUDA
LIU Jin-shuo,ZENG Qiu-mei,ZOU Bin,JIANG Zhuang-yi and DENG Juan
Computer Science. 2014, 41 (4): 24-27. 
Abstract PDF(2121KB) ( 657 )   
This paper proposes a speeded-up robust features (SURF) image registration algorithm based on the Compute Unified Device Architecture (CUDA). We analyze the parallelism of the SURF algorithm and optimize it with CUDA in terms of the thread mapping and memory model of the Graphics Processing Unit (GPU), targeting the five steps of the SURF algorithm: building the scale space, extracting feature points, determining the dominant orientation of feature points, generating the vector descriptor, and feature matching. The experimental results show that the GPU implementation is 33 times faster than the CPU implementation when processing a 30MB image. The GPU implementation extends the application of the SURF algorithm to fast processing of remote sensing images, especially quick image registration.
Research on Incentive Mechanism Based on Social Norms and Boycott
LIAO Xin-kao and WANG Li-sheng
Computer Science. 2014, 41 (4): 28-30. 
Abstract PDF(333KB) ( 376 )   
Because of nodes' inherent rationality, P2P network systems give rise to free-riding behavior, which causes a conflict between personal interests and system performance and greatly reduces network efficiency and utility. A social norm model for P2P networks was established by combining social norms with the idea of boycott. The model punishes nodes that violate the social norm and stimulates nodes to select the cooperation strategy; game revenue analysis establishes the conditions under which the cooperation strategy is an equilibrium. The results of the simulation experiment show that this model can both promote node collaboration effectively and inhibit free-riding behavior.
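The boycott idea can be illustrated with a tiny repeated game: serve a peer only while it is in good standing, and revoke standing after a defection. The payoff numbers (benefit 3 for being served, cost 1 to serve) and the one-round boycott are hypothetical, chosen only to make free-riding unprofitable:

```python
def payoff(a, b):
    # Hypothetical sharing payoffs: benefit 3 for being served, cost 1 to serve.
    return (3 * b - 1 * a, 3 * a - 1 * b)

def repeated(strat_a, strat_b, rounds=10):
    """Repeated P2P game under the social norm: defectors lose their
    good standing and are boycotted from the next round on."""
    total_a = total_b = 0
    ok_a = ok_b = 1                          # reputation under the norm
    for _ in range(rounds):
        a, b = strat_a(ok_b), strat_b(ok_a)  # 1 = cooperate, 0 = free-ride
        pa, pb = payoff(a, b)
        total_a += pa
        total_b += pb
        ok_a, ok_b = a, b                    # free-riding costs your standing
    return total_a, total_b

norm = lambda peer_ok: 1 if peer_ok else 0   # serve only members in standing
free_rider = lambda peer_ok: 0
print(repeated(norm, norm), repeated(norm, free_rider))  # (20, 20) (-1, 3)
```

A free-rider earns 3 once and is then shut out, while mutual cooperation earns 20 each: the boycott makes cooperation the rational choice, which is the equilibrium argument in the abstract.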
Study on Markov Predictive M-LWDF Scheduling Algorithm in M-WiMAX
HU Yong-dong,WU Guo-xin and XU Yi-qing
Computer Science. 2014, 41 (4): 31-35. 
Abstract PDF(473KB) ( 547 )   
In the M-WiMAX system, in order to obtain multi-user diversity gain and make better use of Adaptive Modulation and Coding (AMC), a predictive scheduling algorithm (Pre-LWDF) based on M-LWDF was designed to schedule real-time WiMAX traffic. The M-LWDF algorithm uses the instantaneous rate as a judgment parameter at scheduling time, so the stability and overall performance of the scheduling algorithm are affected, especially in mobile wireless communication. A Markov prediction model is employed to calculate the instantaneous rate at the next scheduling time and then smooth the current instantaneous rate, thus reducing the effect of the instantaneous rate on scheduling performance. The smoothed instantaneous rate better represents the transmission rate trend of the mobile wireless channel. Simulation results show that the scheduling algorithm ensures QoS for rtPS and ertPS traffic and improves the system's average throughput and fairness.
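The standard M-LWDF metric picks the user maximizing a_i * (r_i / R_i) * W_i, with a_i = -log(delta_i)/T_i. A minimal sketch of the smoothed variant follows; the 0.5 smoothing weight, the per-user numbers and the `predicted_rate` field (standing in for the paper's Markov prediction) are all hypothetical:

```python
import math

def mlwdf_priority(delay, inst_rate, avg_rate, delta=0.05, max_delay=0.1):
    # a_i = -log(delta_i) / T_i weighs head-of-line delay against the QoS target.
    a = -math.log(delta) / max_delay
    return a * (inst_rate / avg_rate) * delay

def pick_user(users, alpha=0.5):
    # Smooth the instantaneous rate with a one-step prediction (here given
    # directly) before computing the M-LWDF metric, as Pre-LWDF proposes.
    best, best_p = None, -1.0
    for u in users:
        smoothed = alpha * u["predicted_rate"] + (1 - alpha) * u["inst_rate"]
        p = mlwdf_priority(u["delay"], smoothed, u["avg_rate"])
        if p > best_p:
            best, best_p = u["id"], p
    return best

users = [
    {"id": "A", "delay": 0.02, "inst_rate": 2.0, "predicted_rate": 1.0, "avg_rate": 1.5},
    {"id": "B", "delay": 0.08, "inst_rate": 1.0, "predicted_rate": 1.2, "avg_rate": 1.5},
]
print(pick_user(users))  # B: its larger head-of-line delay dominates
```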
Service Level Agreement Based Allocation of Cloud Resources
FENG Guo-fu,TANG Ming-wei,LIU Lin-yuan and HAN Bing-qing
Computer Science. 2014, 41 (4): 36-39. 
Abstract PDF(417KB) ( 483 )   
Cloud computing differs from traditional computing models such as grid computing and cluster computing, and provides a practical business model for customers to use remote resources. It is natural for cloud providers to maximize their revenue by allocating the pooled computing resources dynamically. This requires transforming customer-oriented service-level metrics into system-oriented operating-level metrics and controlling cloud resources adaptively based on Service Level Agreements (SLAs). This paper addresses the problem of maximizing the provider's revenue through SLA-based dynamic resource allocation among differentiated customers. We formalize the resource allocation problem with queuing theory and propose solutions that take into account factors such as pricing mechanisms, arrival rates, service rates and available resources. The experimental results show that our algorithms outperform related work.
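The flavor of the queuing-theoretic formulation can be sketched with an M/M/1 model per customer class and a simple linear SLA penalty; the service-unit split, prices, penalties and the 0.5s response-time target below are illustrative, not the paper's model:

```python
def mm1_response(lmbda, mu):
    # Mean response time of an M/M/1 queue (requires mu > lmbda).
    return 1.0 / (mu - lmbda)

def revenue(lmbda, mu, price, sla_target, penalty):
    # Hypothetical SLA: full price per request if the mean response time
    # meets the target, a penalty per request otherwise.
    w = mm1_response(lmbda, mu)
    return lmbda * (price if w <= sla_target else price - penalty)

# Split 10 service units between two customer classes to maximize revenue.
best = max(
    ((m, revenue(4.0, m, 1.0, 0.5, 0.8) + revenue(2.0, 10 - m, 2.0, 0.5, 1.5))
     for m in range(5, 8)),
    key=lambda t: t[1],
)
print(best)  # (6, 8.0): the split where both classes just meet their SLA
```

Even this toy search shows the trade-off the paper formalizes: starving either class below its SLA threshold costs more than the capacity it frees.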
Channel Allocation Algorithm in MF-TDMA GEO Satellite System
XU Qi-le,CHEN Jian-zhou and LIU Li-xiang
Computer Science. 2014, 41 (4): 40-43. 
Abstract PDF(339KB) ( 1039 )   
A channel allocation algorithm named C-BFD for MF-TDMA GEO satellite systems was proposed. Compared with traditional channel allocation algorithms, C-BFD uses preferential combination and allocates channels for users based on the characteristics of the channel structure and the requested channel sizes. It reduces channel fragmentation by combining users' requests that can fill up a complete carrier at the same frequency. The simulation results show that this algorithm can effectively bring down the system blocking rate and channel fragmentation.
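The "BFD" part of C-BFD suggests a best-fit-decreasing flavor of bin packing: sort requests by size and place each on the carrier it fills most tightly. The sketch below shows only that generic idea (the paper's exact combination rule is not reproduced); the 8-slot carrier and the request sizes are hypothetical:

```python
def allocate(requests, carrier_slots=8):
    """Best-fit-decreasing allocation of slot requests onto MF-TDMA
    carriers, each offering `carrier_slots` time slots."""
    carriers = []                      # remaining free slots per carrier
    placement = {}
    for user, size in sorted(requests.items(), key=lambda kv: -kv[1]):
        # Best fit: the existing carrier whose leftover space is smallest.
        best = min((i for i, free in enumerate(carriers) if free >= size),
                   key=lambda i: carriers[i] - size, default=None)
        if best is None:               # no carrier fits: open a new one
            carriers.append(carrier_slots)
            best = len(carriers) - 1
        carriers[best] -= size
        placement[user] = best
    return placement, carriers

requests = {"u1": 5, "u2": 3, "u3": 4, "u4": 4, "u5": 2}
placement, free = allocate(requests)
print(len(free), sum(free))  # 3 carriers used, 6 slots of fragmentation
```

Requests that exactly fill a carrier (here u1+u2 and u3+u4) leave zero fragmentation on it, which is the effect the abstract attributes to combining requests.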
Multi-centre Addressing Service Discovery Algorithm with Cell-based Network Partition for WSANs
DU Jing-lin,ZHENG Ruo-qin and XIE Li
Computer Science. 2014, 41 (4): 44-48. 
Abstract PDF(389KB) ( 457 )   
A multi-centre addressing algorithm, MASD, with cell-based network partition was presented to solve the service discovery problem in WSANs. With comprehensive consideration of communication cost and per-node storage load, a multi-centre addressing scheme that uses no global computation and has low computational cost was designed. A sensor node can find the next hop to a nearby actor through addressing or local searching. The simulation results show that the algorithm has a shorter search distance and lower communication cost compared with the iMesh algorithm.
Research on BBV Model with Limited Node Strength Based on Common Neighbors
LU Peng,ZHANG Shan-shan and GAO Qing-yi
Computer Science. 2014, 41 (4): 49-52. 
Abstract PDF(394KB) ( 404 )   
Networks with similar global structural characteristics, such as small-world and scale-free properties, may have different local structural features, and the evolution of a specific real network is subject to the influence of its local structural characteristics. Considering the common-neighbor drive of nodes, a new weighted network evolution model, CNL, was proposed based on the group-degree and node-strength-limited network model. Through an empirical study of an e-mail network, we found that the size of the network generated by the CNL model is consistent with typical factual situations, and that the model reproduces the power-law degree distribution observed in the empirical study. CNL reveals many mechanisms of real network evolution and can be widely used in analyzing real network evolution.
Improved LMMSE Channel Estimation Algorithm
LIAN Zhu-xian,YU Jiang and XU Li-min
Computer Science. 2014, 41 (4): 53-56. 
Abstract PDF(331KB) ( 760 )   
The LS algorithm is simple and vulnerable to noise interference, but it does not need prior knowledge of the channel statistics. Through research on the time-domain characteristics of the channel, an improved LMMSE channel estimation algorithm was proposed. Compared with the traditional LMMSE channel estimation algorithm, the improved LMMSE algorithm does not need prior channel statistical information, and its performance is superior to that of the LS algorithm. Its MSE and BER performance was simulated. Theoretical analysis and simulation results show that the performance of the improved LMMSE algorithm is better than that of the LS algorithm and close to that of the traditional LMMSE algorithm.
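The gap between LS and LMMSE can be shown on a toy OFDM pilot channel. In this sketch the channel correlation matrix is derived from the L-tap time-domain structure (mirroring the abstract's idea of exploiting time-domain characteristics rather than assuming known statistics); the 64-subcarrier, 4-tap, 10dB setup is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, snr_db = 64, 4, 10                    # subcarriers, channel taps, SNR
h_t = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h_t, N)                      # true channel frequency response
sigma2 = 10 ** (-snr_db / 10)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H + noise                               # received pilots (all-ones pilots)

H_ls = Y                                    # LS estimate: just divide by pilots
# LMMSE smoothing: since only the first L time-domain taps are non-zero,
# E[H H^H] = F F^H / L for uniform tap power, with F a partial DFT matrix.
F = np.fft.fft(np.eye(N))[:, :L]
R = F @ F.conj().T / L
H_lmmse = R @ np.linalg.solve(R + sigma2 * np.eye(N), H_ls)

def mse(est):
    return float(np.mean(np.abs(est - H) ** 2))

print(mse(H_lmmse) < mse(H_ls))             # True: LMMSE suppresses the noise
```

Because R has rank L, the LMMSE filter projects the noisy LS estimate onto the L-dimensional channel subspace, which is where the MSE gain comes from.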
PSO-based Video Sensor Networks Coverage Optimization Algorithm
JIANG Peng,JIN Wei-dong,QIN Na,TANG Peng and ZHOU Yan
Computer Science. 2014, 41 (4): 57-61. 
Abstract PDF(2802KB) ( 484 )   
To improve video sensor network coverage, a novel particle swarm optimization (PSO) based camera network planning algorithm was proposed according to the directional features of Pan-Tilt-Zoom cameras. Virtual forces are used to model the location correlation among the cameras being planned, and a virtual force factor is added as a speed-up parameter to guide the PSO toward maximum coverage. This virtual-force-based PSO reduces the optimization time and improves coverage without back-and-forth adjustment. A series of simulation results show that our method achieves a higher coverage rate than conventional methods in complex scenes.
Microblogging Retweet Prediction Algorithm Based on Random Forest
LUO Zhi-lin,CHEN Ting and CAI Wan-dong
Computer Science. 2014, 41 (4): 62-64. 
Abstract PDF(325KB) ( 465 )   
Retweeting is an important information diffusion mechanism in microblogging: users can forward posts from those they follow and share them with their fans, so information diffuses quickly. This paper studies microblogging retweet prediction. We first analyze and extract important features, such as micro network structure, weight ratio and user profiles, and then propose a new prediction algorithm based on random forest (RFMR). The experiments show that, compared with other classifiers, RFMR has better performance and can effectively predict user retweet behavior.
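The random-forest idea behind RFMR can be sketched without any library: bootstrap the training set, grow a randomized weak learner per sample (decision stumps here, for brevity), and predict by majority vote. The two toy features (a follower ratio and a mention flag) and all numbers are hypothetical:

```python
import random

def train_stump(X, y, feat):
    # Best threshold on one feature; plain accuracy stands in for Gini.
    best = (0.0, 0, 1)                     # (accuracy, threshold, label_above)
    for t in sorted(set(row[feat] for row in X)):
        for above in (0, 1):
            preds = [above if row[feat] > t else 1 - above for row in X]
            acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            if acc > best[0]:
                best = (acc, t, above)
    return feat, best[1], best[2]

def random_forest(X, y, n_trees=15, seed=7):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]     # bootstrap sample
        feat = rng.randrange(len(X[0]))              # random feature subset
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        forest.append(train_stump(Xb, yb, feat))
    return forest

def predict(forest, row):
    votes = sum(above if row[feat] > t else 1 - above
                for feat, t, above in forest)
    return int(votes * 2 >= len(forest))             # majority vote

# Toy features: [follower_ratio, has_mention]; label: retweeted or not.
X = [[0.1, 0], [0.2, 0], [0.9, 1], [0.8, 1], [0.15, 0], [0.85, 1]]
y = [0, 0, 1, 1, 0, 1]
forest = random_forest(X, y)
print([predict(forest, r) for r in [[0.12, 0], [0.95, 1]]])
```

A production version would use full trees and Gini impurity, but the bagging-plus-voting structure is the same.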
Model of Autonomous Management for Survivable System
SU Yan,ZHAO Guo-sheng,WANG Jian,ZHANG Nan and LI Lin
Computer Science. 2014, 41 (4): 65-69. 
Abstract PDF(469KB) ( 393 )   
This paper presents a model of autonomous management for survivable systems based on survivability detection parameters. The autonomous management mechanism is implemented by an autonomous detection and control unit. Firstly, a number of survivability detection parameters are defined, and dynamic, variable threshold constraints are determined according to the cumulative distribution function. Then, based on the reference marks of the parameters, calculation methods for the comprehensive evaluation of service connections are given, and autonomous management of the survivable system is achieved through the control unit. Finally, simulation experiments show that the proposed method can effectively improve the service capacity of a survivable system and enhance its survivability.
Research on Routing Problem of Inter-domain Composed Service Based on Overlay Network
ZHANG Yan-mei and CAO Huai-hu
Computer Science. 2014, 41 (4): 70-74. 
Abstract PDF(444KB) ( 480 )   
The routing problem of inter-domain composed services based on overlay networks is studied in depth. Since routing policy strongly affects inter-domain routing, a multi-objective optimization model with multiple constraints is built for inter-domain composed service routing. A layered algorithm is adopted to handle the functional constraints of the optimization model, and an improved ant colony algorithm is then employed to solve the problem in the layered model. The simulation shows that the non-dominated solutions are evenly distributed, which means the algorithms perform well and the overlay-based inter-domain routing method is feasible.
Research on Network Traffic Prediction Scheme Based on Autoregressive Moving Average
ZHOU Qiang and PENG Hui
Computer Science. 2014, 41 (4): 75-79. 
Abstract PDF(433KB) ( 385 )   
Detecting intrusion attacks accurately and rapidly in wireless networks is one of the most challenging security problems. Various types of intrusion attacks can be detected by the change in traffic flow that they induce. We propose an intrusion detection system for WIA-PA networks. After modeling and analyzing traffic flow data with time-series techniques, we propose a data traffic prediction model based on the autoregressive moving average (ARMA) of the time series data. The model can quickly and precisely predict network traffic. We initialize the model with data traffic measurements taken by a 16-channel analyzer. Test results show that our scheme can effectively detect intrusion attacks, improve overall network performance, and prolong the network lifetime.
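The prediction-then-compare detection loop can be sketched with a least-squares AR(p) fit standing in for the full ARMA model (the MA part needs an iterative fit and is omitted here); the synthetic periodic traffic trace and the residual threshold of 1.0 are hypothetical:

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of AR(p): x_t ~ c + a_1 x_{t-1} + ... + a_p x_{t-p}.
    rows = [x[t - p:t][::-1] for t in range(p, len(x))]
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef

def one_step(x, coef):
    # Predict the next sample from the last p observations.
    p = len(coef) - 1
    return coef[0] + coef[1:] @ x[-p:][::-1]

rng = np.random.default_rng(3)
t = np.arange(300)
# Synthetic daily-periodic traffic (e.g. packets/s) plus measurement noise.
traffic = 10 + 3 * np.sin(2 * np.pi * t / 24) + 0.2 * rng.standard_normal(t.size)

coef = fit_ar(traffic[:-1], p=3)        # train on history
pred = one_step(traffic[:-1], coef)     # predict the held-out last sample
residual = abs(pred - traffic[-1])
print(residual < 1.0)  # a large residual would flag a possible intrusion
```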
Load Balancing-oriented Autonomous Live Migration Framework for Virtual Machine
SUN Dong-dong,LIU Qing and WU Yi-ni
Computer Science. 2014, 41 (4): 80-85. 
Abstract PDF(609KB) ( 533 )   
This paper proposes an autonomous dynamic virtual machine migration framework for load balancing based on the idea of the ant colony algorithm. The framework needs no central management module, so servers can perform autonomous virtual machine migration, avoiding a single point of failure. The migration mechanism is realized with intelligent ants, and the ants' search radius is adjusted automatically according to the system load using fuzzy logic reasoning in order to improve search performance. Finally, the cloud computing simulation platform CloudSim is extended to implement the proposed autonomous virtual machine migration framework. Experiments were carried out on the extended platform to verify the feasibility of the framework; the framework parameters were set, and its excellent load balancing ability was demonstrated by analyzing and comparing the simulation results.
Prediction of Network Traffic Based on Traffic Characteristics
ZHANG Feng-li,ZHAO Yong-liang,WANG Dan and WANG Hao
Computer Science. 2014, 41 (4): 86-89. 
Abstract PDF(397KB) ( 585 )   
Traditional models, such as nonlinear models, cannot capture network traffic well, so researchers can model network traffic accurately only by taking its characteristics into account. By combining analyses of the self-similarity, length distribution and periodicity of network traffic, using the wavelet transform and a time series model to predict the traffic, and finally comparing the length distribution and periodicity, we can judge whether the prediction result is reasonable. Firstly, characteristics of network traffic such as self-similarity and stationarity are analyzed. Secondly, based on this analysis, the model is constructed and prediction results are obtained using the wavelet transform and the time series model. Finally, taking advantage of the length distribution and period, the model's flexibility and accuracy are verified. Experiments show that our model reduces computation compared with the W-FARIMA model and reflects both the short-range and long-range dependence of network traffic.
Spatial Cloaking Algorithm Based on Grid Expansion in P2P Mode
WANG Jia-hui and CHENG Jiu-jun
Computer Science. 2014, 41 (4): 90-94. 
Abstract PDF(435KB) ( 409 )   
Location k-anonymity has recently become a research focus in the privacy-preserving field of location-based services. Typical spatial cloaking algorithms require a centralized trusted anonymity server, which can become a system bottleneck and a single point of attack, while existing spatial cloaking algorithms in P2P (peer-to-peer) mode suffer from several attack models. A spatial cloaking algorithm based on grid expansion in P2P mode was proposed to solve this problem. It divides the space into grids and computes the cloaking region by repeatedly doubling the grid width until the user's privacy requirement is satisfied. Meanwhile, intermediate results are shared and cached with other peers while the algorithm runs. The experimental results show that the proposed algorithm reduces network bandwidth consumption and the time cost of spatial cloaking. Moreover, it is free from the center-of-cloak attack and more resistant to the sample query attack in comparison with existing algorithms.
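The grid-doubling core of the algorithm can be sketched directly; snapping the region to grid boundaries (rather than centering it on the user) is what defeats the center-of-cloak attack, since the user is not at the region's center. The coordinates, cell size and k below are illustrative:

```python
def cloak(user, peers, k, cell=1.0, max_width=64):
    """Expand a grid-aligned square around the user, doubling its width
    each round, until it covers at least k peers (k-anonymity)."""
    ux, uy = user
    width = cell
    while width <= max_width:
        # Snap the region to the grid so it is not centered on the user.
        x0 = (ux // width) * width
        y0 = (uy // width) * width
        inside = sum(1 for (px, py) in peers
                     if x0 <= px < x0 + width and y0 <= py < y0 + width)
        if inside >= k:
            return (x0, y0, width)
        width *= 2
    return None                     # privacy requirement not satisfiable

peers = [(0.5, 0.5), (1.5, 0.2), (3.5, 3.5), (2.5, 1.0), (0.2, 1.8)]
print(cloak((0.7, 0.6), peers, k=4))  # (0.0, 0.0, 4.0)
```

In the P2P setting each doubling step would query nearby peers instead of a global list, and the intermediate counts are what the paper shares and caches.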
QR-TCM:A Privacy Protection Model with Quality Guarantee for Location-based Services
HU Wen-ling and WANG Yong-li
Computer Science. 2014, 41 (4): 95-98. 
Abstract PDF(291KB) ( 367 )   
To solve the problem that traditional location-service anonymity models take a long time to produce an anonymous region, the quasi-real-time cloak model (QR-TCM) was established. The model includes a privacy protection method called the clock rotation cloak algorithm (CRCA). After a comprehensive analysis of the factors that influence anonymity, a model that addresses users' service delay and a method of measuring the quality of service were proposed. The experiments use standard data sets and measure the QR-TCM model along multiple dimensions, such as response time and degree of privacy. The experimental results confirm that the method is suitable for continuous-query location privacy protection and can effectively protect users' privacy while offering fast service.
Design of New Authentication Schemes for Mobile Platform
HU Wei,ZHANG Huan-guo,WEI Guo-heng and ZHOU Xue-guang
Computer Science. 2014, 41 (4): 99-102. 
Abstract PDF(1765KB) ( 428 )   
Various types of mobile platforms, such as smartphones, tablet computers and embedded systems, have rapidly become popular and penetrated all aspects of life and work. Mobile platform applications bring a rich, colorful and convenient life, but also many new security problems. Identity authentication and access authentication are the first barrier protecting a mobile platform. Combining multi-touch technology, gravity sensing technology and graphical passwords, several authentication schemes were designed and developed for mobile platforms, including a curve-drawing authentication scheme, an image-choice authentication scheme, a multi-point authentication scheme and a gravity-sensing authentication scheme. Setting and verifying passwords is convenient with the designed schemes, which can produce a larger key space through simple operations and possess higher security than simple password authentication.
3D Object Signature Algorithm Based on Spin Image and ICP Algorithm
MO Ke-ming,REN Yi,WEI Chun-zi and ZHOU Fu-cai
Computer Science. 2014, 41 (4): 103-106. 
Abstract PDF(1100KB) ( 386 )   
Existing methods for verifying the authenticity of 3D objects have the problems that the identity of a 3D object is not bound to the object itself and is difficult to verify. To address these problems, this paper proposes a signature algorithm suitable for physical 3D objects and presents its design ideas, signature generation algorithm and signature verification algorithm. The spin image and ICP algorithms are used to match the surfaces of the signature model and the test model in the coarse matching and fine matching phases respectively, thereby improving verification accuracy. A simulation of the designed 3D signature algorithm was carried out. The obtained iteration error and match rate show that the algorithm can differentiate between original and counterfeit models, demonstrating its validity.
Assessment and Validation of Network Security Situation Based on DS and Knowledge Fusion
TANG Cheng-hua,TANG Shen-sheng and QIANG Bao-hua
Computer Science. 2014, 41 (4): 107-110. 
Abstract PDF(397KB) ( 368 )   
The network security situation assessment process involves a large number of complex and uncertain influencing factors. Aiming at the lack of correctness and rationality in situation assessment, a situation index identification space and evaluation criteria based on DS evidential theory were set up. A network situation assessment method based on DS and knowledge fusion was proposed through expert knowledge fusion reasoning, and a situation assessment case study was verified, combined with the computation of a three-layer network host vulnerability index. Experimental results show that the method performs well in situation assessment and thus provides a feasible solution for situation assessment.
Intrusion Detection System Based on Improved Naive Bayesian Algorithm
WANG Hui,CHEN Hong-yu and LIU Shu-fen
Computer Science. 2014, 41 (4): 111-115. 
Abstract PDF(565KB) ( 658 )   
With increasing Internet connectivity and traffic volume, recent intrusion incidents have re-emphasized the importance of network intrusion detection systems (IDS). To address the deficiencies of the Naive Bayes (NB) algorithm, this paper proposes an improved NB algorithm that augments the original model with a classification control parameter. The algorithm simplifies the complexity of data classification and optimizes classification accuracy through the computed parameter values. The experimental results prove that applying the algorithm in an intrusion detection framework can drastically reduce the false alarm rate of the IDS, thereby improving detection efficiency and decreasing the economic damage caused by cyber attacks.
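A minimal Gaussian NB classifier over connection features shows the baseline the paper improves on. The `alpha` knob below, sketched as a prior-scaling weight, is only a hypothetical stand-in for the paper's classification control parameter; the toy flood-traffic features are likewise invented:

```python
import numpy as np

def fit_gnb(X, y):
    # Per-class mean, variance and prior: the standard Gaussian NB fit.
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return model

def predict(model, x, alpha=1.0):
    # alpha scales the prior's influence -- a hypothetical stand-in for the
    # paper's classification control parameter, not its actual definition.
    def log_post(c):
        mu, var, prior = model[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return ll + alpha * np.log(prior)
    return max(model, key=log_post)

# Toy connection features: [packets/s, mean packet size in bytes].
X = np.array([[10, 500], [12, 480], [11, 520],
              [900, 60], [950, 64], [880, 58]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])     # 0 = normal, 1 = attack (flood)
m = fit_gnb(X, y)
print(predict(m, np.array([870.0, 61.0])))  # 1: classified as attack
```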
Design and Analysis of Secure Anti-collision Search Protocol for RFID
CAO Zheng,YANG Lin and XIE Hui
Computer Science. 2014, 41 (4): 116-119. 
Abstract PDF(362KB) ( 445 )   
According to the particularities of RFID search protocols, their security requirements were extended in this paper. On the basis of the extended security requirements, security enhancements and improvements to the original SSP protocol were made, and a secure anti-collision search protocol for RFID, called the SSP+ protocol, was proposed. Security analysis of the SSP+ protocol proves that the improved protocol not only provides general security properties, such as privacy protection, anonymity and untraceability, but also eliminates the security risk caused by collisions, thereby meeting anti-collision, the security requirement unique to search protocols.
User Behavior Analysis Model Based on Game Theory under Multi-clouds Environment
NIE Ting-ting and GUO Yu-cui
Computer Science. 2014, 41 (4): 120-125. 
Abstract PDF(491KB) ( 397 )   
Aiming at the lack of research on Distributed Denial of Service (DDoS) attacks in multi-cloud environments, a user behavior analysis model based on game theory was proposed from the perspective of the cloud service provider. The model first constructs a payoff matrix based on game theory, then judges user behavior through a fuzzy membership function, and evaluates the resource consumption and profit of the cloud service provider under non-cooperation and cooperation scenarios. Simulations show that the proposed cooperation model can reduce resource consumption, lower the risk of DDoS attacks for cloud service providers, and raise the profit of each resource unit by more than three times, which is of practical significance.
Multilevel Real-time Payload-based Intrusion Detection System Framework
LIU Jie-fang,ZHAO Bin and ZHOU Ning
Computer Science. 2014, 41 (4): 126-133. 
Abstract PDF(706KB) ( 486 )   
Intrusion detection systems use large feature sets to identify intrusions, so they must deal with huge volumes of network traffic; however, most existing systems lack real-time anomaly detection capability. This paper presents a multilevel real-time payload-based intrusion detection system. It first uses n-grams to analyze network packet payloads and build a feature model for data preparation, and then uses a 3-Level Iterative Feature Selection Engine (3LIFSEng) for feature subset selection. Principal component analysis in 3LIFSEng is used for data preprocessing, and principal components are selected by combining cumulative energy, parallel analysis and the scree test. A Mahalanobis distance map is used to discover the hidden dependencies between packets and between features, and the Mahalanobis distance criterion is used to distinguish normal packets from attack packets. The DARPA 99 and GATECH datasets verify the system's validity, web application traffic verifies its model, and the F-measure assesses its detection performance. Experimental results show that, compared with two mainstream intrusion detection systems, the system improves detection accuracy and reduces the false positive rate and computational complexity. Additionally, it achieves 1.3 times higher throughput in a real scenario on a medium-sized enterprise network.
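The payload-modeling idea can be sketched with byte 1-grams and a simplified, diagonal-covariance distance (a stand-in for the full Mahalanobis distance map, which uses the full covariance structure); the HTTP training payloads and the shellcode-like attack bytes are hypothetical:

```python
import numpy as np
from collections import Counter

def ngram_profile(payload):
    # Byte 1-gram frequency vector (256 dims) -- the payload feature model.
    v = np.zeros(256)
    for byte, count in Counter(payload).items():
        v[byte] = count / len(payload)
    return v

# "Normal" training payloads (hypothetical HTTP requests).
normal = [b"GET /index.html HTTP/1.1", b"GET /style.css HTTP/1.1",
          b"GET /img/logo.png HTTP/1.1", b"GET /about.html HTTP/1.1"]
X = np.array([ngram_profile(p) for p in normal])
mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-3  # smoothed per-byte deviation

def anomaly_score(payload):
    # Diagonal-covariance simplification of the Mahalanobis criterion.
    return float(np.mean(np.abs(ngram_profile(payload) - mu) / sd))

attack = bytes(range(0x80, 0x98)) * 3          # shellcode-like high bytes
benign = b"GET /news.html HTTP/1.1"
print(anomaly_score(attack) > anomaly_score(benign))  # True
```

Bytes never seen in training have near-zero variance, so any occurrence of them pushes the score up sharply, which is why payload models catch binary shellcode in text protocols.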
Security RFID Authentication Scheme in Supply Chains
YANG Chao,ZHANG Hong-qi and QING Meng-yu
Computer Science. 2014, 41 (4): 134-138. 
Abstract PDF(440KB) ( 369 )   
A secure RFID authentication scheme for supply chains should not only ensure the privacy and security of all the companies involved, but also satisfy the requirements of supply chain management. Aiming at the problem that existing schemes cannot address both sides at the same time, this paper proposes an authentication scheme based on a “double signature”. The tag authentication message is signed with both the access key and the authentication key, and only a company possessing both keys can identify the tag. This scheme lets tags transfer safely within the supply chain and makes supply chain management convenient. Analysis shows that the scheme has clear advantages over existing schemes.
File Hiding Based on FAT Redirection
WANG Yu-long,LI Qing-bao,WANG Wei and NIU Xiao-peng
Computer Science. 2014, 41 (4): 139-144. 
Abstract PDF(531KB) ( 605 )   
File hiding technology is an important data security protection method. The FAT32 file system has good compatibility and is widely used in flash-memory-based removable storage. By analyzing the file layout rules and storage features of FAT32, we propose a new method that hides files by erasing and transferring the directory entries of target files and rewriting the FATs through redirection. Theoretical analysis and experimental results show that this method offers large capacity, good concealment, strong robustness and excellent security, but low efficiency when hiding large files.
Integrated Reliability Modeling and Analysis of Hardware/Software of ERCS System Based on Copulas
GUO Rong-zuo
Computer Science. 2014, 41 (4): 145-149. 
Abstract PDF(452KB) ( 360 )   
System reliability is an increasingly important issue in modern electronic, manufacturing and industrial systems, and Embedded Real-time Control Systems (ERCS) are at the core of most control systems, so their reliability is especially important. First, a formalization of the hardware/software of an ERCS is defined. Then, reliability models are given for software modules that cannot be subdivided and for IP hard cores, and an integrated hardware/software reliability model of the ERCS is established by applying Copula functions. The reliability of a specific system's hardware/software is calculated with the model. The results show that the reliability model established with Copula functions takes into account the correlation between software modules, IP hard cores, and hardware and software, so the integrated hardware/software reliability of the ERCS is improved compared with treating hardware and software as independent.
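The effect of modeling dependence with a copula can be shown in a few lines. The Gumbel-Hougaard copula below is a common choice for positively dependent components; the reliabilities 0.95/0.90 and theta = 2 are illustrative, and the paper's specific copula family is not assumed:

```python
import math

def gumbel_copula(u, v, theta=2.0):
    # Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence
    # strength, and theta = 1 reduces to the independence copula u * v.
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

r_sw, r_hw = 0.95, 0.90          # hypothetical software/hardware reliabilities
independent = r_sw * r_hw        # joint reliability assuming independence
coupled = gumbel_copula(r_sw, r_hw, theta=2.0)
print(independent < coupled)     # True: positive dependence raises the joint value
```

This is the abstract's point in miniature: ignoring the hardware/software correlation (the `independent` product) underestimates the integrated system reliability.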
Efficient Attribute-based Authenticated Key Agreement Protocol
CHEN Yan-li,DU Ying-jie and YANG Geng
Computer Science. 2014, 41 (4): 150-154. 
Abstract PDF(476KB) ( 650 )   
A novel ciphertext-policy attribute-based encryption (CP-ABE) scheme was proposed. By employing Linear Secret Sharing Schemes (LSSS), any access structure can be expressed. The decryption procedure needs only three bilinear maps, resulting in more efficient computation irrespective of the attribute set. The CP-ABE scheme is proven selectively secure under chosen-plaintext attack in the standard model. Based on this efficient scheme, an efficient Attribute-based Authenticated Key Agreement protocol (ABAKA) was proposed. Combined with the NAXOS technique, ABAKA can resist leakage of users' keys; a proof is given in the ABeCK model. Finally, the paper gives an analysis and experimental results for the computational overhead.
Markov Model for Predicting Trust
ZHANG Feng,WANG Jian,ZHAO Yan-fei and DU He
Computer Science. 2014, 41 (4): 155-158. 
Abstract PDF(415KB) ( 620 )   
References | RelatedCitation | Metrics
Due to the fuzziness and inaccuracy of trust evaluation, trust calculation based on fuzzy logic has gained more and more attention, but existing fuzzy-logic trust models do not adequately consider the effect of past transactions on trust evaluation, leading to insufficient accuracy of the computed trust values. A Markov chain was introduced to record the value of users' past transactions. Combining transition probabilities and steady-state probabilities, an MTP algorithm based on Markov chains was proposed. Simulation results show that the proposed algorithm improves the accuracy of fuzzy-logic trust assessment, restrains attacks by malicious and oscillating nodes, and is particularly effective against oscillating nodes.
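The steady-state probabilities mentioned above can be computed by power iteration over the transition matrix. This is a generic sketch, not the paper's MTP algorithm; the three trust levels and the transition probabilities are hypothetical.

```python
# Estimating the steady-state distribution of a trust-level Markov chain
# by power iteration. States 0..2 might represent low/medium/high trust.

def steady_state(P, iters=1000):
    """Return the stationary distribution of a row-stochastic matrix P."""
    n = len(P)
    pi = [1.0 / n] * n                  # start from the uniform distribution
    for _ in range(iters):
        # pi_new[j] = sum_i pi[i] * P[i][j]  (one step of the chain)
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Transition probabilities estimated from a node's past transaction history.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]
pi = steady_state(P)
```

The resulting `pi` summarizes a node's long-run trust behavior, which is what makes oscillating nodes (alternating good and bad transactions) detectable even when recent transactions look honest.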
Research on DDoS Intrusion Detection System Based on Linux High Speed Packet Capturing Platform
LI Zhong-wen,WU Cheng-bin and XU Xiao-chen
Computer Science. 2014, 41 (4): 159-162. 
Abstract PDF(331KB) ( 487 )   
References | RelatedCitation | Metrics
Achieving wire-speed packet capture and upper-layer security applications in a gigabit network environment has long been a hot research topic. In previous work we built NACP, a high-speed Gigabit Ethernet packet capture platform, using memory mapping and other techniques. On this basis, using the distribution of IP addresses and system resource usage as detection parameters, we implemented an anti-DDoS intrusion detection system based on the Snort tool. Experiments on NACP show that the improved Snort DDoS detection tool is compatible with the high-speed packet capture platform, and DDoS events can be quickly detected and responded to. Because the high-speed capture platform is used, the system resources occupied by DDoS detection are significantly reduced, which greatly improves system efficiency and lets the system handle other tasks during intrusion detection.
Design and Implementation of Network Provenance System Based on Declarative Networking
GAO Xiang,WANG Xiao and WANG Min
Computer Science. 2014, 41 (4): 163-167. 
Abstract PDF(497KB) ( 370 )   
References | RelatedCitation | Metrics
Network forensic analysis and fault diagnosis are becoming increasingly important in the network management and network security domains, which requires that network management systems be able to query network metadata. For instance, network provenance can be used to track the path of a dataflow through the network to obtain the source of data. This paper presented the design and implementation of a network provenance system (NPS) framework that supports the full range of functionality required for forensics in distributed systems. We adopted the declarative networking technique to maintain and query distributed network provenance. The framework uses a reference-based approach to transfer provenance information and a cyclic graph to represent it, implementing efficient network provenance in a distributed network. Simulation experiments indicate that our network provenance system can support provenance processing in a large-scale distributed network and significantly reduces bandwidth cost compared to the traditional approach.
Universally Composable Secure Authentication Protocol for Mobile Sensors Based on Physical Unclonable Function System
SONG Sheng-yu,ZHANG Zi-nan,WANG Ya-di and LI Jun-feng
Computer Science. 2014, 41 (4): 168-171. 
Abstract PDF(355KB) ( 409 )   
References | RelatedCitation | Metrics
The Internet of Things consists of a sensor subnet and a transmission network. The sensor subnet usually collects information via sensor nodes with limited computing, communication and storage capabilities, implemented by combining mobile and static nodes, while the transmission network commonly provides strong computing, communication and storage services using the existing Internet infrastructure. To provide a security control mechanism when mobile sensors move from one cluster to another, while taking into account both security and practical feasibility, this paper presented an authentication and key exchange protocol based on Physical Unclonable Functions (PUF). The protocol achieves bidirectional authentication and key negotiation between a mobile sensor and a transmission-network base station when the sensor roams to another cluster. Analysis shows that the proposed protocol is universally composable secure.
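The PUF challenge-response idea underlying such protocols can be illustrated as follows. This is not the paper's protocol: the HMAC model of a PUF, the enrollment step, and all names are assumptions made only to show how a base station can authenticate a roaming sensor from pre-recorded challenge-response pairs (CRPs).

```python
# Toy model of PUF-based authentication. A real PUF derives responses from
# physical manufacturing variation; here it is modeled as HMAC over the
# challenge with a device-unique secret (an assumption for illustration).
import hmac, hashlib, os

def make_puf(device_secret):
    """Return a function behaving like a device-unique PUF."""
    return lambda challenge: hmac.new(device_secret, challenge,
                                      hashlib.sha256).digest()

device_key = os.urandom(32)
puf = make_puf(device_key)

# Enrollment: the base station records CRPs while the sensor is trusted.
crps = {c: puf(c) for c in (os.urandom(16) for _ in range(4))}

# Authentication on roaming: the base station replays an unused challenge;
# only the genuine device can reproduce the expected response.
challenge, expected = crps.popitem()
assert hmac.compare_digest(puf(challenge), expected)
```

Because each challenge is used once and responses cannot be computed without the physical device, a cloned or captured node without the PUF cannot answer correctly.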
Chaotic Neural Network Model for Software Reliability
ZHANG Ke,ZHANG De-ping and WANG Shuai
Computer Science. 2014, 41 (4): 172-177. 
Abstract PDF(499KB) ( 350 )   
References | RelatedCitation | Metrics
A forecasting method based on Empirical Mode Decomposition (EMD), chaos analysis and neural network theory was presented to model and forecast software reliability. First, using EMD, the software failure time series is decomposed into intrinsic mode functions (IMFs) that represent the underlying information of the original series, and further analysis of the IMFs indicates whether software failure exhibits chaotic features. Then, using chaos theory and neural networks, forecasting models are established for each IMF. In this way the model can learn various objective functions and more precise predictions can be obtained. Comparison with forecasts combining SVR and neural networks proves that the EMD&GEP forecasting method performs better for software reliability forecasting.
Inferring Algorithm for a Subclass of Restricted Regular Expressions
FENG Xiao-qiang,ZHENG Li-xiao and CHEN Hai-ming
Computer Science. 2014, 41 (4): 178-183. 
Abstract PDF(468KB) ( 554 )   
References | RelatedCitation | Metrics
The problem of inferring XML schemas reduces to inferring deterministic regular expressions from a set of sample strings. A subclass of restricted regular expressions that commonly occur in practical XML schemas was proposed, and an algorithm for inferring this kind of regular expression was described. The algorithm first constructs the corresponding automaton from the sample set, then infers the regular expression from the automaton and the samples. The complexity of the algorithm is max(O(|V|+|E|), O(L)), where V and E are the sets of states and edges of the constructed automaton, respectively, and L is the total length of the samples. The termination and correctness of the algorithm were proved.
DTL-Real-Time Object-Z Specification Language and Implementation on Obligation Authorization Model
MA Li,ZHONG Yong and HUO Ying-yu
Computer Science. 2014, 41 (4): 184-189. 
Abstract PDF(454KB) ( 381 )   
References | RelatedCitation | Metrics
Because Object-Z cannot fully describe temporal factors, such as the execution of operations at specific times or in cycles, and lacks the concept of operation compensation, the paper presented DTL-Real-Time Object-Z: an integration of real-time Object-Z with Distributed Temporal Logic. The language can effectively express the event-driven, time-driven and compensation aspects of operations. The syntax and semantics of the language were analyzed and discussed. Finally, the expressiveness and application of the language were shown through a formal description of an obligation authorization model.
Research of Performance Evaluation of Cloud Storage
ZHOU Xiao-peng,ZHANG Xiao-fang and ZHAO Xiao-nan
Computer Science. 2014, 41 (4): 190-194. 
Abstract PDF(425KB) ( 581 )   
References | RelatedCitation | Metrics
With the arrival of the big data era, the demand for cloud storage is growing ever stronger. Performance evaluation of cloud storage has always been a key and difficult point in cloud storage research. By analyzing the data access process of typical cloud storage platforms, a data access model of cloud storage was built. With the help of iptools capturing IP packets, the time cost of each phase of processing user accesses was obtained, so the true performance can be observed with network interference excluded. The test results show that the model is sound and that the test method can effectively measure the true performance of a cloud storage system.
Storage Caching Sub-system for Cloud Application
CAI Tao,NIU De-jiao,ZHANG Yong-chun,NI Xiao-rong and ZHOU Dong-ming
Computer Science. 2014, 41 (4): 195-199. 
Abstract PDF(523KB) ( 718 )   
References | RelatedCitation | Metrics
Performance is very important for cloud applications. Cloud applications differ from conventional applications in management strategy and access pattern, so traditional cache management strategies cannot meet their requirements. According to the features of cloud applications, a cache strategy based on impact factors was designed to implement a storage cache sub-system for cloud applications. Metadata and data caches are managed separately, distinguishing the impact of creation, opening, reading, writing and modification on the probability that a cache entry is accessed again. Two cache correlation strategies were then designed to improve cache management performance according to the relation between metadata and data. An active scheduling strategy improves adaptability by actively evicting cache entries with low impact factors and dynamically adjusting the space of the metadata and data caches. Finally, a prototype was released and evaluated with Filebench and Postmark. The results show the storage cache sub-system can improve I/O performance by 1%~120% and operation processing speed by 2%~87%.
XML Query Expansion Based on High Quality Expansion Source and Local Word Co-occurrence Model
ZHONG Min-juan,WAN Chang-xuan,LIU De-xi,LIAO Shu-mei and JIAO Xian-pei
Computer Science. 2014, 41 (4): 200-204. 
Abstract PDF(0KB) ( 211 )   
References | RelatedCitation | Metrics
Two problems must be solved in query expansion: the origin of the expanded terms, and the selection of appropriate terms from the expansion source. This paper therefore proposed a query expansion method in which a high-quality set of relevant documents is first obtained via an XML search-result clustering and ranking model and used as the expansion source, and then a local word co-occurrence model combining XML document structure features is applied to select the expanded query terms. The experimental data prove two points. On the one hand, the proposed expansion-source acquisition method obtains more relevant documents, and the source has higher quality than that of traditional pseudo-relevance feedback. On the other hand, compared to the original query and the structure-free method, the expanded terms selected by local word co-occurrence with XML structural features are more relevant to the user's query intention and lead to better retrieval performance.
Computational Complexity Analysis of Backtrack-free and Random-walk Strategies on Constraint Satisfaction Problems with Growing Domains
XU Wei and GONG Fu-zhou
Computer Science. 2014, 41 (4): 205-210. 
Abstract PDF(485KB) ( 436 )   
References | RelatedCitation | Metrics
Constraint satisfaction problems (CSPs) with growing domains are an important class of practical models in complexity theory, but research on algorithm performance for them is still rare. By studying the RB model, a typical CSP with growing domains, it is discovered that the backtrack-free strategy works better than the random-walk strategy when the problem size is huge. Unlike CSPs with fixed domains such as SAT, this is a property special to CSPs with growing domains. Experimental results and theoretical analysis for these two strategies were also given.
Research and Application of Similarity Based on Search Engine
LIU Sheng-jiu, LI Tian-rui, JIA Zhen and JING Yun-ge
Computer Science. 2014, 41 (4): 211-214. 
Abstract   
References | RelatedCitation | Metrics
Search engines are among the most important Internet applications of modern society and attract more and more attention in scientific research as well as in the commercial arena. Aiming at the limitation of current similarity-calculation methods, which cannot reflect the relationship between objects globally, we investigated a new method for similarity calculation based on search engines. Using ideas from set theory, introducing the market shares of search engines, and applying a series of simplifying mathematical steps, the similarity is obtained from the merged results of several search engines. Experimental results confirm the feasibility and effectiveness of the proposed method.
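One way to realize the set-theoretic, market-share-weighted idea sketched above is to derive a Jaccard-style similarity from hit counts and average it across engines. The formula, the engine names, the hit counts and the share values below are all assumptions for illustration, not the paper's exact method.

```python
# Estimating the similarity of two terms x and y from search-engine hit
# counts, weighting each engine by its (hypothetical) market share.

def pair_similarity(hits_x, hits_y, hits_xy):
    """Jaccard-style similarity from counts for x, y and the joint query."""
    union = hits_x + hits_y - hits_xy      # |X ∪ Y| by inclusion-exclusion
    return hits_xy / union if union else 0.0

def combined_similarity(per_engine, shares):
    """Market-share-weighted average over all engines."""
    total = sum(shares.values())
    return sum(shares[e] * pair_similarity(*per_engine[e])
               for e in per_engine) / total

per_engine = {"A": (1000, 800, 400),       # (hits_x, hits_y, hits_xy), made up
              "B": (1200, 900, 500)}
shares = {"A": 0.6, "B": 0.4}
s = combined_similarity(per_engine, shares)
```

Merging several engines, as the abstract describes, reduces the bias any single engine's index introduces into the counts.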
Community Development Method Based on Interactive Similarity
ZHANG Xing,YU Zhi-wen,LIANG Yun-ji and GUO Bin
Computer Science. 2014, 41 (4): 215-218. 
Abstract PDF(461KB) ( 509 )   
References | RelatedCitation | Metrics
The community structure of online social networks contributes to in-depth research on information propagation, social recommendation and group identity discovery. Existing community mining methods ignore the many social attributes among users, which makes it difficult for the obtained community structure to reflect fine-grained structure. We incorporated users' social attributes into the community mining algorithm, proposed a user interaction model to measure users' social interaction properties, and then proposed a community detection method based on interaction similarity. The algorithm can effectively measure social attributes between users, obtain groups of different sizes through hierarchical clustering, and filter noisy data. To verify the effectiveness of the algorithm, we collected user interaction records from a social networking site as data sets and compared its performance with other community mining algorithms. The experimental results show that this method discovers fine-grained communities with high accuracy and can be used to discover different topics across communities.
Research and Application of Data Mining and Dynamic Neural Networks in Load Forecasting
LI Xiao-feng,HUANG Guo-xing,GAO Wei-wei and DING Shu-chun
Computer Science. 2014, 41 (4): 219-222. 
Abstract PDF(327KB) ( 363 )   
References | RelatedCitation | Metrics
The dependence of medium- and long-term load variation on socio-economic indicators is difficult to express with an accurate mathematical model. This paper applied data mining techniques to the association analysis of total electricity consumption growth. Multiple indicators were selected from the socio-economic indicators since 2000 to compose a relevant-factor database, and missing data were completed. Several indicators closely related to total electricity consumption were mined using cluster analysis, and distorted data were corrected, yielding a more scientific load forecasting model. The model was then tested and validated using a time-series dynamic neural network. The results show that the prediction model converges well and achieves satisfactory results.
Research on CBR’s Case Representation and Similarity Measure Based on ALCQ(D)
SUN Jin-yong,GU Tian-long,CHANG Liang and MA Lin-wei
Computer Science. 2014, 41 (4): 223-229. 
Abstract PDF(613KB) ( 476 )   
References | RelatedCitation | Metrics
To address the lack of qualified number restrictions and concrete-domain restrictions in description logics such as EL, ALC and ALCNR that have been used for CBR case representation, ALCQ(D), which is equipped with qualified number restrictions and a concrete-domain constructor, was adopted. First, ALCQ(D) concepts were used to represent and index cases requiring qualified number restrictions, concrete data types and numerical restrictions; two concrete-domain types, numerical and symbolic, were studied. Second, the normal form of ALCQ(D) was defined to normalize case representations in the form of indexes. Finally, a case-similarity measure was presented, which measures the similarity of each part of the case representations and then weights and sums the obtained similarities. Experimental results show that ALCQ(D) represents cases more accurately and that the measure assesses the similarity between cases more adequately, which is important for increasing the speed and accuracy of case retrieval and improving the efficiency of the CBR system.
Linear Programming Support Vector Regression Method Based on One-class Classification
SUN De-shan,ZHAO Jun,GAO Cai-kui,ZHENG Ping and LIU Xiao-fei
Computer Science. 2014, 41 (4): 230-232. 
Abstract PDF(316KB) ( 367 )   
References | RelatedCitation | Metrics
A new support vector regression algorithm based on linear programming was proposed from the viewpoint of one-class classification, which better reveals the relation between one-class classification and regression. Tests were performed on a sine function, chaotic time series and real-world data sets. Experiments show that the new method has comparable or better generalization performance than ε-insensitive Support Vector Regression (ε-SVR), Linear Programming Support Vector Regression (LP-SVR) and Least Squares Support Vector Regression (LS-SVR), and that the proposed method is feasible and valid.
ELPS:An Efficient Information Trajectory Extracting Algorithm in Microblog
WANG Yue and HUANG Wei-jing
Computer Science. 2014, 41 (4): 233-238. 
Abstract PDF(2213KB) ( 422 )   
References | RelatedCitation | Metrics
With the development of Social Networking Services (SNS), SNS has become an important tool for people to communicate with each other. The rich user-generated content (UGC) of SNS contains useful knowledge about information propagation rules, so SNS can be used to study public opinion and information propagation in social networks. Because information propagation happens discretely and sparsely in online social networks, it is hard to directly observe and study the propagation process in a network with over 10 million nodes. To meet these challenges, the paper (1) provided a model, "info-trajectory", to capture information propagation pathways in an online social network, (2) proposed several algorithms to efficiently extract info-trajectories from practical microblog social networks by employing the repost timeline (a kind of publicly available repost notification data of microblogs), (3) studied the temporal relations of the repost actions of users on the obtained info-trajectories, (4) proposed the K-advocators algorithm to discover advocators from information propagation trajectories and propagation patterns in the microblog, and (5) in the experiment section, provided sufficient experiments studying info-trajectories for several topics on Sina microblog (a prevalent microblog application in China) and several other popular SNSs. The results show that the proposed methods are efficient at extracting info-trajectories and useful for discovering advocators for specific topics in microblogs.
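The core of extracting an info-trajectory from a repost timeline can be sketched as building a propagation tree from repost records and enumerating root-to-leaf chains. The record format `(reposter, reposted_from, time)` and the tiny data set are assumptions; the paper's ELPS algorithm is not reproduced here.

```python
# Reconstructing information trajectories from repost-timeline records:
# each record adds a parent->child edge, and root-to-leaf chains give
# the propagation paths of one piece of information.

def trajectories(records):
    """records: iterable of (reposter, reposted_from, timestamp)."""
    children, reposters = {}, set()
    for user, source, _t in sorted(records, key=lambda r: r[2]):
        children.setdefault(source, []).append(user)
        reposters.add(user)
    # Roots are sources that never appear as reposters (original posters).
    roots = [u for u in children if u not in reposters]
    paths = []
    def walk(u, path):
        if u not in children:            # leaf: one complete trajectory
            paths.append(path)
            return
        for v in children[u]:
            walk(v, path + [v])
    for r in roots:
        walk(r, [r])
    return paths

records = [("b", "a", 1), ("c", "a", 2), ("d", "b", 3)]  # toy repost log
paths = trajectories(records)
```

On the toy log, user `a` posts, `b` and `c` repost `a`, and `d` reposts `b`, giving the two trajectories `a→b→d` and `a→c`.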
Research of Mining Meta-association Rules for Dynamic Association Rule Based on Gray-periodic Extensional Model
ZHANG Zhong-lin,SHI Hao-yin and SONG Hang
Computer Science. 2014, 41 (4): 239-243. 
Abstract PDF(422KB) ( 343 )   
References | RelatedCitation | Metrics
A method applying the Gray-Periodic extensional combinatorial model to meta-rule mining for dynamic association rules was proposed to improve prediction accuracy. In this method, a GM(1,1) model is first established from the support counts of the meta-rules of the dynamic association rules, and the residual sequence generated during modeling is used to build the periodic extensional model and extract the optimal cycle. Second, the periodic extensional model is used as residual compensation for the GM(1,1) model, so the final prediction model is obtained by superimposing the GM(1,1) model and the results of the periodic extensional model. The results show that the method has higher prediction accuracy.
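The GM(1,1) building block can be sketched with the standard textbook formulation: accumulate the series, fit the grey parameters a and b by least squares over the adjacent means, and forecast by differencing the fitted exponential. The support sequence below is hypothetical, and the periodic residual compensation the paper adds is not reproduced.

```python
# Standard GM(1,1) grey forecasting of a meta-rule's support sequence.
from math import exp

def gm11_forecast(x0, steps=1):
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulated series
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # adjacent means
    # Least squares for a, b in x0(k) = -a*z1(k) + b (2x2 normal equations).
    y = x0[1:]
    m = n - 1
    s_zz, s_z = sum(z * z for z in z1), sum(z1)
    s_zy, s_y = sum(z * v for z, v in zip(z1, y)), sum(y)
    det = s_zz * m - s_z * s_z
    a = -(s_zy * m - s_z * s_y) / det
    b = (s_zz * s_y - s_z * s_zy) / det

    def x1_hat(k):                                          # 0-based time index
        return (x0[0] - b / a) * exp(-a * k) + b / a

    # Recover forecasts of the original series by first differences.
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

support = [72, 75, 79, 84, 90]     # hypothetical per-period support counts
pred = gm11_forecast(support, steps=2)
```

In the paper's scheme, the residuals of this fit would then feed the periodic extensional model, whose output is superimposed on the GM(1,1) forecast.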
FO-CA:A Multiple Attribute Data Classification Method Based on Distance Difference Degree Combination Weighting
GONG An,GAO Hai-kang,XU Jia-fang and MA Xing-min
Computer Science. 2014, 41 (4): 244-247. 
Abstract PDF(339KB) ( 337 )   
References | RelatedCitation | Metrics
To solve the problem of multi-attribute data classification, we proposed a classification method based on fuzzy optimization and clustering analysis (FO-CA). First, we used a fuzzy optimization model to obtain a one-dimensional composite-indicator data set; in the weighting stage, a combination weighting method based on the distance difference degree was established to integrate subjective and objective weights. Second, we used hierarchical cluster analysis to divide the composite-indicator data set into several clusters and then classified the clusters. Finally, we selected the Iris, Wine and Ruspini data sets from the UCI Machine Learning Repository for simulation experiments. The results show that the proposed method achieves better results than the fuzzy optimization method and the K-Means algorithm, and provides an effective approach to data classification.
Discernibility Function-based Algorithm for Finding All Maximal Cliques
HUANG Zhi-guo and LI Na
Computer Science. 2014, 41 (4): 248-251. 
Abstract PDF(328KB) ( 351 )   
References | RelatedCitation | Metrics
Finding the maximal cliques of a graph is a fundamental problem in graph theory. By combining discernibility functions with maximal cliques, the discernibility function of a maximal clique over a vertex and the Boolean mapping function of a vertex with respect to any vertex set were defined, and some characteristics and theorems related to maximal cliques were obtained. It was then proved that the problem of finding all maximal cliques in a graph can be naturally expressed as relatively simple constraints of a discernibility-function expression. Furthermore, a discernibility-function-based algorithm for finding all maximal cliques was designed by introducing a reduction-tree construction method. Theoretical derivation and a simulation instance show that this algorithm is feasible and effective in practice.
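For reference, the classical baseline for this problem is the Bron-Kerbosch algorithm (shown below without pivoting). This is the standard enumeration method, not the paper's discernibility-function approach; the 4-vertex graph is a made-up example.

```python
# Bron-Kerbosch enumeration of all maximal cliques.
# R: current clique, P: candidate vertices, X: already-processed vertices.

def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(sorted(R))          # R can no longer be extended: maximal
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# A 4-vertex graph: triangle {0,1,2} plus the edge {2,3}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
adj = {v: set() for v in range(4)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
```

The X set is what guarantees *maximality*: a clique is only reported when no processed vertex could still extend it, which is the same condition the discernibility-function constraints must encode.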
Deployment Strategies Research on Cloud Computing under Bursty Workloads on Neural Network
CHEN Peng,MA Zi-tang,SUN Lei and SUN Dong-dong
Computer Science. 2014, 41 (4): 252-255. 
Abstract PDF(389KB) ( 378 )   
References | RelatedCitation | Metrics
Aiming at the degraded system performance that bursty workloads bring to cloud computing, a resource deployment model based on an error back-propagation neural network was proposed. The network module is started automatically when the onset of a bursty workload is judged. Parameter adjustment values are predicted by the pre-trained network in order to dynamically track changes in the underlying resources and external tasks of the cloud computing system. Simulation results in CloudSim show that introducing the neural network module effectively improves the response speed of resource deployment.
Research on Double Weight Parameter Anti-collision Q Value Algorithm in RFID System
REN Shou-gang,YANG Fan and XU Huan-liang
Computer Science. 2014, 41 (4): 256-259. 
Abstract PDF(358KB) ( 535 )   
References | RelatedCitation | Metrics
Tag anti-collision algorithms are a hot research topic in RFID technology and the key to high-intensity, large-scale applications. This paper first analyzed the characteristics of the Q-value adjustment algorithm recommended in the anti-collision mechanism of the EPC-C1G2 standard. To address its problems, a new double-parameter Q-value adjustment algorithm, called the ODWQA algorithm, was proposed; the idea of the algorithm, its operation process and the method for determining its key parameters were introduced. By splitting the single parameter c into two weight parameters c1 and c2, handling the collision and idle cases respectively, the numbers of collision slots and idle slots can be controlled. Laboratory analysis then determined the best weight-parameter values under different Q values. Simulation results show that the proposed algorithm not only decreases the number of collision slots but also increases system throughput and uses fewer time slots than previous work.
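The double-weight idea can be sketched on top of the standard EPC-C1G2 scheme, where a floating-point value Qfp is raised on collision slots and lowered on idle slots. Using different weights c1 and c2 for the two cases is the paper's idea; the numeric values, the clamping range and the slot sequence below are assumptions.

```python
# Double-weight Q adjustment for slotted-ALOHA RFID anti-collision.
# In EPC-C1G2 the frame size is 2^Q; Qfp is the floating-point shadow value.

def adjust_q(qfp, slot, c1=0.3, c2=0.2):
    """slot is 'collision', 'idle' or 'success'; returns the new Qfp."""
    if slot == "collision":
        qfp = min(15.0, qfp + c1)   # too many tags answered: enlarge frame
    elif slot == "idle":
        qfp = max(0.0, qfp - c2)    # nobody answered: shrink frame
    return qfp                       # a success slot leaves Qfp unchanged

qfp = 4.0
for slot in ["collision", "collision", "idle", "success"]:
    qfp = adjust_q(qfp, slot)
q = round(qfp)                       # the reader uses Q = round(Qfp)
```

Decoupling c1 from c2 lets the reader react more aggressively to collisions than to idle slots (or vice versa), which is how the algorithm trades off collision slots against idle slots.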
Vague Spatial Decision Method and its Application for Location of Tailings Dam
ZHANG Kun,WANG Hong-xu,WANG Hai-feng and LI Zhuang
Computer Science. 2014, 41 (4): 260-262. 
Abstract PDF(1191KB) ( 319 )   
References | RelatedCitation | Metrics
Based on fuzzy set and vague set theory, we proposed novel research methods and calculation formulas and applied them to a spatial decision problem: the location of a tailings dam. The raw data were transformed into vague data, yielding the vague sets of the alternatives and the optimal solution. The proposed methods and formulas were used to calculate similarity measures between vague sets for vague spatial decision making, and the optimal decision was obtained. Experimental results show that the proposed methods and formulas provide a useful approach for such applications.
N-gram Chinese Characters Counting for Huge Text Corpora
YU Yi-Jiao and LIU Qin
Computer Science. 2014, 41 (4): 263-268. 
Abstract PDF(566KB) ( 1336 )   
References | RelatedCitation | Metrics
Counting the N-gram Chinese characters of huge text corpora is a challenge for Chinese information processing, and Cici was developed to count huge Chinese text corpora efficiently. We found that the number of distinct Chinese strings is maximal when the string length is 6, and that the number of strings can be estimated from the average sentence length. Since most Chinese strings appear no more than 10 times in the corpora, the N-gram characters are stored in 13 separate files according to their frequency, and only frequently used strings are sorted; this strategy speeds up the counting process dramatically. Due to limited physical memory, huge Chinese corpora have to be divided into blocks, whose size is suggested to be 20MB. Every block is counted separately, and the block statistics are then merged. We implemented this algorithm for counting huge corpora efficiently on a personal computer.
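The block-then-merge strategy can be sketched in a few lines. This is a simplification: the two short strings stand in for 20MB blocks, and Cici's frequency-ranked file layout and sorting of high-frequency strings are omitted.

```python
# Block-wise N-gram counting with a final merge, mirroring the
# count-each-block-then-merge strategy used for memory-limited corpora.
from collections import Counter

def count_ngrams(text, n):
    """Count all character n-grams in one block of text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def merge_counts(block_counts):
    """Merge per-block statistics into corpus-wide statistics."""
    total = Counter()
    for c in block_counts:
        total += c
    return total

blocks = ["中文信息处理", "中文分词处理"]   # stand-ins for 20MB blocks
total = merge_counts(count_ngrams(b, 2) for b in blocks)
```

Because each block's counts fit in memory and merging is a simple sum, the corpus size is bounded only by disk, not by RAM.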
New Clustering Algorithm Based on Hadoop
MIAO Yu-qing,ZHANG Jin-xing,LIU Shao-bing,WEN Yi-min and MING Mei
Computer Science. 2014, 41 (4): 269-272. 
Abstract PDF(373KB) ( 413 )   
References | RelatedCitation | Metrics
Hadoop is a popular platform for handling huge data sets, but many clustering algorithms cannot run effectively on it because it lacks built-in support for the iterative programs that arise naturally in many clustering applications. We proposed bigClustering, which can be easily parallelized in Hadoop MapReduce and completes in a small number of MapReduce rounds. The algorithm is based on the ideas of micro-clusters and equivalence relations. It divides a data set into many groups and constructs one micro-cluster, treated as a single point, for each group. All micro-clusters that are close enough are connected and put into the same group by the equivalence relation, and the center of each resulting group becomes the center of a real cluster in the data set. Experiments show that bigClustering not only runs fast and obtains high clustering quality but also scales well.
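The micro-cluster plus equivalence-relation step can be illustrated with a union-find over micro-cluster centers: centers closer than a threshold are connected, and connected components form the final clusters. The 1-D toy data and the threshold are assumptions, and the MapReduce parallelization is omitted.

```python
# Connecting micro-clusters by an equivalence relation (union-find):
# centers within eps of each other end up in the same final cluster.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def cluster_centers(centers, eps):
    n = len(centers)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(centers[i] - centers[j]) <= eps:  # 1-D distance for brevity
                parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

micro = [0.0, 0.4, 0.9, 5.0, 5.3]       # micro-cluster centers (toy data)
result = cluster_centers(micro, eps=0.5)
```

Treating each micro-cluster as a single point is what keeps the pairwise-connection step cheap enough to finish in a few MapReduce rounds.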
New Supervised Manifold Learning Method Based on MMC
YUAN Min,YANG Rui-guo,YUAN Yuan and LEI Ying-ke
Computer Science. 2014, 41 (4): 273-279. 
Abstract PDF(739KB) ( 459 )   
References | RelatedCitation | Metrics
Based on the analysis of local spline embedding (LSE) method,we proposed an efficient feature extraction algorithm called orthogonal local spline discriminant projection (O-LSDP).By introducing an explicit linear mapping,constructing different translation and rescaling models for different classes as well as orthogonalizing feature subspace,O-LSDP can effectively circumvent the two major shortcomings of the original LSE algorithm,i.e.,out-of-sample and unsupervised learning.O-LSDP not only inherits the advantages of LSE which uses local tangent space as a representation of the local geometry so as to preserve the local structure,but also makes full use of class information and orthogonal subspace to significantly improve discriminant power.Extensive experiments on standard face databases and plant leaf data set verify the feasibility and effectiveness of the proposed algorithm.
Graph-based Semi-supervised Dimensionality Reduction Algorithm
YANG Ge-lan,JIN Hui-xia,MENG Ling-zhong and ZHU Xing-hui
Computer Science. 2014, 41 (4): 280-282. 
Abstract PDF(1060KB) ( 589 )   
References | RelatedCitation | Metrics
Nonlinear dimensionality reduction and semi-supervised learning are both hot issues in the machine learning area. Based on a semi-supervised method, the article solves the nonlinear dimensionality reduction problem to make up for the shortcomings of ordinary methods. By integrating equality constraints, a novel formulation of the label propagation algorithm was proposed. We use the label propagation result as the initial mapping and then find its best approximation in the graph spectral space. Experiments show that our semi-supervised dimensionality reduction method achieves a smooth data mapping that is closer to the ideal effect.
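The label propagation step can be illustrated with the standard iterative scheme: labelled nodes stay clamped while unlabelled nodes repeatedly take the weighted average of their neighbours' scores. The paper's equality-constraint formulation is not reproduced; the chain graph and labels are toy assumptions.

```python
# Iterative label propagation on a weighted graph.

def propagate(W, labels, iters=200):
    """W: symmetric weight matrix; labels: {node: score} for labelled nodes."""
    n = len(W)
    f = [labels.get(i, 0.0) for i in range(n)]
    for _ in range(iters):
        g = list(f)
        for i in range(n):
            if i in labels:
                continue                 # clamp known labels
            s = sum(W[i][j] * f[j] for j in range(n))
            d = sum(W[i])                # node degree (sum of edge weights)
            g[i] = s / d if d else 0.0
        f = g
    return f

# Chain 0-1-2-3 with node 0 labelled +1 and node 3 labelled -1.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
f = propagate(W, {0: 1.0, 3: -1.0})
```

The converged scores (here +1/3 and -1/3 for the interior nodes) then serve as the initial mapping that is projected onto the graph spectral space.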
Modified Function Projective Lag Synchronization of Chaotic Systems Subject to Unknown Disturbance
CHAI Xiu-li,WANG Yu-jing,YUAN Guang-yao and SHI Chun-xiao
Computer Science. 2014, 41 (4): 283-286. 
Abstract PDF(384KB) ( 317 )   
References | RelatedCitation | Metrics
In this paper, modified function projective lag synchronization (MFPLS) of a class of chaotic systems in which both the drive system and the response system are subject to unknown external disturbances was investigated, and a general theorem for MFPLS was introduced. Based on the Lyapunov stability theorem and the adaptive control method, two different response systems were considered, and an adaptive controller and an update law for the disturbance control strength were constructed. MFPLS of the chaotic systems was achieved under unknown uncertain disturbances, and the disturbance control strength could also be estimated automatically. Finally, numerical simulation results for a hyperchaotic system verify the validity, effectiveness and robustness of the theoretical results. The designed controller is simple and practical and has broad applications in secure communication and other fields.
Automatic Location of Feature Points on Three-dimensional Facial Model Based on Depth Image
LI Kang,SHANG Peng and GENG Guo-hua
Computer Science. 2014, 41 (4): 287-291. 
Abstract PDF(1317KB) ( 380 )   
References | RelatedCitation | Metrics
Accurate location of feature points on a three-dimensional facial model is one of the key issues in craniofacial morphological research. To remove the manual intervention usually required in locating facial feature points, the paper proposed a method based on the depth image of the 3D facial model. First, it generates the two-dimensional depth image of the three-dimensional facial model; then it uses the SUSAN operator and the gray-level integral projection method to locate feature points on this image; finally it maps the located feature points back onto the facial model, thereby locating thirteen points including the nose tip, mouth corners, eye corners and ear points. Experimental results show that the method can automatically and accurately locate the feature points of the facial model, effectively avoiding the problems caused by human involvement in facial feature point location.
Hierarchically Extracting Feature Points of 3D Deformable Shapes
PAN Xiang,ZHANG Guo-dong and CHEN Qi-hua
Computer Science. 2014, 41 (4): 292-296. 
Abstract PDF(1324KB) ( 410 )   
References | RelatedCitation | Metrics
This paper addressed the problem of consistently detecting feature points across different poses of 3D deformable shapes, and proposed a new algorithm that detects feature points hierarchically based on learning samples. Firstly, the algorithm detects the external feature points of the input 3D shape. Secondly, exploiting the local similarity of the external points under different postures, their semantic tags are recognized using the heat kernel signature and a support vector machine. Finally, the remaining feature points are extracted hierarchically by combining the semantic tags with the geodesic distances to the external points. Experiments show that the proposed algorithm is robust in detecting semantic-aware feature points on deformable shapes.
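The heat kernel signature (HKS) used for semantic tagging is easy to compute from a Laplacian eigendecomposition: HKS_t(v) = sum_i exp(-lambda_i * t) * phi_i(v)^2. The sketch below evaluates it on a toy graph (a path of five vertices) rather than on the Laplace-Beltrami operator of a triangle mesh, which is what the paper would actually use.

```python
import numpy as np

# Graph Laplacian of a 5-vertex path: a stand-in for the mesh Laplacian.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i+1] = A[i+1, i] = 1.0
L = np.diag(A.sum(1)) - A
lam, phi = np.linalg.eigh(L)               # eigenvalues and eigenvectors

def hks(t):
    """Heat kernel signature at diffusion time t, one value per vertex."""
    return (np.exp(-lam*t) * phi**2).sum(axis=1)

sig = hks(0.5)
```

Because the signature is intrinsic, symmetric vertices (the two endpoints, the two second vertices) get identical values, which is exactly what makes it usable as a pose-invariant descriptor for the SVM.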
Incremental Kernel Discriminant Analysis Method via QR Decomposition
WANG Wan-liang,CHEN Yu,QIU Hong and ZHENG Jian-wei
Computer Science. 2014, 41 (4): 297-301. 
Abstract PDF(436KB) ( 435 )   
References | RelatedCitation | Metrics
To improve the online learning efficiency of nonlinear systems, an incremental kernel discriminant analysis method via QR decomposition was developed. The algorithm maps the kernel space to a lower-dimensional space before performing the feature decomposition, reducing both the amount of computation and the storage required for the kernel matrix. It then avoids redundant computation by incorporating incremental updating. Experiments on the TE process and the ORL data set show that the algorithm is effective for feature decomposition and more efficient than the batch method.
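The benefit of a QR step before the eigendecomposition can be shown directly: with n samples in d >> n dimensions, factoring X^T = QR lets one work with the small n-by-n matrix R^T R, which has the same spectrum as the Gram matrix. This is only the dimensionality-reduction trick, not the full incremental KDA of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 500
X = rng.standard_normal((n, d))            # rows are samples, d >> n

Q, R = np.linalg.qr(X.T)                   # reduced QR: Q is d x n, R is n x n
small = np.linalg.eigvalsh(R.T @ R)        # eigen-step on an n x n matrix
big = np.linalg.eigvalsh(X @ X.T)          # Gram matrix spectrum, for comparison
```

Since X X^T = R^T Q^T Q R = R^T R, the two spectra agree, so the expensive eigendecomposition can always be done in the reduced space.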
Moving Object Detection Algorithm Using SILTP Texture Information
YANG Guo-liang,ZHOU Dan and ZHANG Jin-hui
Computer Science. 2014, 41 (4): 302-305. 
Abstract PDF(1127KB) ( 801 )   
References | RelatedCitation | Metrics
Accurate detection of moving objects is a prerequisite of many video analysis techniques. This paper put forward a moving object detection algorithm based on background subtraction. It extracts texture features using the scale invariant local ternary pattern (SILTP) operator, initializes the background model directly from the texture values of the first frame of the video sequence rather than estimating their distribution, and updates the background model by combining a random substitution strategy with the spatial information of the pixels. Test results on the Wallflower dataset show that the algorithm achieves better detection results than comparable methods: it not only satisfies real-time requirements but is also robust to shadows and illumination variation.
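The SILTP operator itself is compact enough to sketch. Each of the 8 neighbours of a pixel contributes a 2-bit code: 01 if it is brighter than (1+tau) times the centre, 10 if darker than (1-tau) times the centre, 00 otherwise. Because the thresholds scale with the centre intensity, the code is invariant to multiplicative illumination changes, which is the property the abstract relies on.

```python
import numpy as np

def siltp(img, tau=0.05):
    """SILTP codes for interior pixels of a grayscale image."""
    h, w = img.shape
    codes = np.zeros((h-2, w-2), dtype=np.uint32)
    centre = img[1:-1, 1:-1].astype(float)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:h-1+dy, 1+dx:w-1+dx].astype(float)
        bits = np.where(nb > (1+tau)*centre, 1,
               np.where(nb < (1-tau)*centre, 2, 0)).astype(np.uint32)
        codes |= bits << np.uint32(2*k)    # pack 2 bits per neighbour
    return codes

flat = siltp(np.full((5, 5), 100.0))       # uniform patch: all codes zero
scaled = siltp(np.full((5, 5), 200.0))     # doubled intensity: same codes
```

In the detection algorithm these codes, computed on the first frame, initialize the background model directly.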
Fast Corner Detector Based on Chord-to-Point Distance Accumulation
JIN Yi-ting,WANG Wan-liang,ZHAO Yan-wei and JIANG Yi-bo
Computer Science. 2014, 41 (4): 306-308. 
Abstract PDF(406KB) ( 665 )   
References | RelatedCitation | Metrics
To overcome the disadvantages of corner detectors based on chord-to-point distance accumulation, such as low localization accuracy and high time complexity, a fast robust corner detector was proposed. The detector combines the advantages of single-scale and multi-scale analysis. First, it obtains candidate corners at a single scale, retaining adjacent corners and greatly reducing subsequent computation. It then calculates the eigenvalues of the candidate corners at multiple scales, effectively eliminating false corners. Experimental results show that the detector reduces time complexity while keeping the robustness of the original detector, and the accuracy of corner localization is also improved.
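The chord-to-point distance measure at the heart of this family of detectors can be sketched as follows. This toy variant scores each point of a polyline by its distance to the single chord joining its L-th neighbours; the real detector accumulates distances over all chord placements and several chord lengths, which is precisely the cost the paper's single-scale prefiltering reduces.

```python
import numpy as np

def chord_dist(p, a, b):
    """Perpendicular distance from point p to the chord through a and b."""
    ab, ap = b - a, p - a
    return abs(ab[0]*ap[1] - ab[1]*ap[0]) / np.hypot(ab[0], ab[1])

def cpda(points, L=3):
    """Toy single-chord score: high where the curve bends sharply."""
    n = len(points)
    s = np.zeros(n)
    for i in range(L, n - L):
        s[i] = chord_dist(points[i], points[i-L], points[i+L])
    return s

# A hypothetical L-shaped test polyline with its corner at index 5.
pts = np.array([(i, 0.0) for i in range(6)] + [(5.0, j) for j in range(1, 6)])
score = cpda(pts)
```

Candidate corners are the local maxima of the score; on this curve the maximum falls exactly on the bend.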
Face Recognition Method Based on Low-rank Recovery Sparse Representation Classifier
DU Hai-shun,ZHANG Xu-dong,HOU Yan-dong and JIN Yong
Computer Science. 2014, 41 (4): 309-313. 
Abstract PDF(1235KB) ( 362 )   
References | RelatedCitation | Metrics
A face recognition method based on a low-rank recovery sparse representation classifier (LRR_SRC) was proposed to overcome two disadvantages of sparse representation-based classification (SRC): the identity matrix performs poorly as an error dictionary when describing the noise and corruption of face images, and the dictionary is incomplete when training samples are insufficient. In this method, the training samples are first decomposed into a low-rank approximation matrix and a sparse error matrix using the low-rank recovery (LRR) algorithm, and the two matrices together compose a dictionary. The sparse representation of a given test sample is then computed over this dictionary. Using the sparse coefficients associated with each class, LRR_SRC approximates the test sample and calculates the reconstruction error between the test sample and its class-specific approximation; the test sample is assigned to the class with the smallest reconstruction error. Experimental results on the YaleB and CMU PIE face databases show that the proposed method achieves a higher recognition rate.
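The decision rule LRR_SRC shares with SRC, classification by class-wise reconstruction error, can be sketched on synthetic data. For brevity this sketch replaces the low-rank dictionary learning and the l1-regularized coding with plain least squares per class; the data generator and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_class(basis, k=8):
    """Training samples lying in the span of a class-specific basis."""
    return basis @ rng.standard_normal((basis.shape[1], k))

B0 = rng.standard_normal((30, 3))          # subspace basis of class 0
B1 = rng.standard_normal((30, 3))          # subspace basis of class 1
D = {0: make_class(B0), 1: make_class(B1)} # per-class training columns

def classify(y):
    """Assign y to the class whose training columns reconstruct it best."""
    errs = {}
    for c, Dc in D.items():
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        errs[c] = np.linalg.norm(y - Dc @ coef)
    return min(errs, key=errs.get)

test0 = B0 @ rng.standard_normal(3)        # sample from class 0's subspace
label = classify(test0)
```

A sample drawn from a class's subspace is reconstructed almost exactly by that class's columns and poorly by the other class's, so the smallest residual identifies the correct label.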
Image Reconstruction Algorithm Based on Graph Cuts and Gradient-based Algorithms
CHENG Li-jun,ZHANG Yu-bo and XU Cong-fu
Computer Science. 2014, 41 (4): 314-318. 
Abstract PDF(1828KB) ( 329 )   
References | RelatedCitation | Metrics
Image reconstruction is a key problem in optical molecular imaging, a promising technique on the frontier of biomedical optics. This paper proposed a generalized hybrid algorithm for image reconstruction based on the graph cut algorithm and gradient-based algorithms. The graph cut algorithm estimates a reliable source support without prior knowledge, and different gradient-based algorithms are then applied sequentially, according to the reconstruction status, to obtain an accurate and fine source distribution. Furthermore, multilevel meshes for the internal sources are used to speed up the computation and improve reconstruction accuracy. Numerical simulations validate the proposed algorithm and demonstrate its high performance in multi-source situations, even when detection noise, optical property errors and phantom structure errors are present in the forward imaging.
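The two-stage structure, estimate a source support first, then refine the source values by gradient-based optimization on that support, can be sketched on a linear toy problem. Here the support step is a simple thresholded back-projection standing in for the paper's graph cut estimate, and the forward operator `A` is a hypothetical random matrix rather than a light-propagation model.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))          # hypothetical forward operator
x_true = np.zeros(100)
x_true[[10, 55]] = [1.0, 2.0]               # two internal sources
b = A @ x_true                              # simulated boundary measurements

# Stage 1: coarse support from the back-projection (graph-cut stand-in).
bp = A.T @ b
support = np.argsort(-np.abs(bp))[:5]

# Stage 2: gradient descent on the residual, restricted to the support.
x = np.zeros(100)
for _ in range(500):
    g = A.T @ (A @ x - b)
    x[support] -= 0.005 * g[support]
```

Restricting the refinement to the estimated support is what keeps the underdetermined problem tractable; the residual norm of the recovered `x` drops well below that of the zero initial guess.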