Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 43, Issue 7, July 2016
Survey of Management of Crosscutting Concerns
HE Cheng-wan
Computer Science. 2016, 43 (7): 7-12.  doi:10.11896/j.issn.1002-137X.2016.07.001
Abstract PDF(605KB) ( 480 )   
The key difference between AOSD (Aspect-Oriented Software Development) and OOSD (Object-Oriented Software Development) is the management of crosscutting concerns, which should run through the whole AOSD process. Firstly, the definition and manifestations of crosscutting concerns at different stages of software development were summarized. Then, the most recent research advances on several key issues in the management of crosscutting concerns, such as the identification of crosscutting concerns in the requirements phase, behavioral constraints, and evolutionary mechanisms, were surveyed. Finally, future research directions were outlined, and possible solutions to these key issues were proposed.
Review of Malware Detection Based on Data Mining
HUANG Hai-xin, ZHANG Lu and DENG Li
Computer Science. 2016, 43 (7): 13-18.  doi:10.11896/j.issn.1002-137X.2016.07.002
Abstract PDF(658KB) ( 949 )   
Data mining is a statistics-based method for automatically discovering rules in data. It can analyze huge numbers of samples to build a discriminative model, so that an attacker cannot learn the underlying rules and thereby evade detection. It has attracted widespread interest and has developed rapidly in recent years. In this paper, research on malware detection based on data mining was summarized. Research results on feature extraction, feature selection, classification models and their performance evaluation methods were analyzed and compared in detail. Finally, the challenges and prospects in this field were discussed.
Survey of Cloud Computing Security Based on Infrastructure
CHENG Hong-bing, ZHAO Zi-xing and YE Chang-he
Computer Science. 2016, 43 (7): 19-27.  doi:10.11896/j.issn.1002-137X.2016.07.003
Abstract PDF(850KB) ( 582 )   
Cloud computing is leading an information technology revolution with its advantages of efficiency, reliability, and low cost. However, security issues remain the obstruction that limits the development and popularization of cloud computing, so security research is undisputedly a hot topic in this field. In this paper, we divided cloud computing into a resource layer, a resource abstraction layer and a service layer, and examined data security, virtual machine security, multi-tenant isolation, application deployment, data processing, identity control and auditing. The paper reviewed recent progress in this area based on this architectural division of cloud computing and provides references for further research in cloud computing.
Cloud Computing Architecture of Chinese Character Culture Digitization System
YANG Yi, ZHANG Gui-gang, WANG Jian, HUANG Wei-xing and SU Hai-xia
Computer Science. 2016, 43 (7): 28-34.  doi:10.11896/j.issn.1002-137X.2016.07.004
Abstract PDF(854KB) ( 442 )   
Chinese characters are a core element of Chinese civilization and play an important role in Chinese culture and history. Computer technology provides essential methods for Chinese character digitization. The Chinese character culture digitization system (CCCDS) was developed to digitize not only Chinese characters but also the Chinese culture around them. To deal with the rapidly increasing volume of digitized Chinese character culture information, cloud computing and big data techniques were introduced as important means for data storage, data management, and data analytics. The system provides an interactive user experience platform whose architecture is scalable, highly available, and secure. The system possesses real-time and historical data analysis modules for big data analysis, in order to satisfy the requirements of applications and services based on Chinese character culture. The architecture of the system was validated by experiments.
Graded Belief-Desire-Intention (BDI) Models for Agent Architectures
ZHANG Xiao-jun, LIN Ying and ZHOU Chang-le
Computer Science. 2016, 43 (7): 35-40.  doi:10.11896/j.issn.1002-137X.2016.07.005
Abstract PDF(512KB) ( 521 )   
The Belief-Desire-Intention (BDI) model is one of the most influential theories in agent technology. By blending the infinite-valued Łukasiewicz logic with propositional dynamic logic to formalize this model, the authors proposed a GBDI(PDL+LL) logic in this paper. In order to represent uncertain behavior as probability, necessity and possibility, the corresponding axioms were added to the Łukasiewicz logic. The GBDI(PDL+LL) agent model explicitly represents the uncertainty of beliefs, desires and intentions by using multi-context systems, and is general enough to specify different types of agents. The GBDI(PDL+LL) agent's behavior is determined by the different measures of each context, to which concrete conditions are added. This paper seeks a possible axiomatic modeling of beliefs, desires and intentions, and shows how they influence the agent's behavior. The model can also easily be extended to agents with other mental attitudes. After presenting the language and semantics for this model, we proposed axioms and rules for the GBDI(PDL+LL) logic and proved soundness and completeness. On the basis of handling composite actions, we illustrated the relationships among the contexts of the model. It is hoped that the present study will contribute to uncertain representation and reasoning, as well as providing formal support for distributed artificial intelligence.
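For reference, the standard truth functions of the infinite-valued Łukasiewicz logic that this formalization builds on are (a sketch of the base logic only; the paper's graded extension adds probability, necessity and possibility axioms on top):

```latex
v(\neg \varphi) = 1 - v(\varphi), \qquad
v(\varphi \rightarrow \psi) = \min\{1,\; 1 - v(\varphi) + v(\psi)\}, \qquad
v(\varphi \otimes \psi) = \max\{0,\; v(\varphi) + v(\psi) - 1\}
```

Here $v$ maps formulas to truth degrees in $[0,1]$, which is what allows beliefs, desires and intentions to be graded rather than two-valued.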
Multi-label Image Annotation Based on Convolutional Neural Network
LI Jian-cheng, YUAN Chun and SONG You
Computer Science. 2016, 43 (7): 41-45.  doi:10.11896/j.issn.1002-137X.2016.07.006
Abstract PDF(704KB) ( 725 )   
In today's life, image resources are almost ubiquitous, and the ocean of images can be overwhelming. How to query, retrieve and organize this image information quickly and effectively is an urgent issue. Automatic image annotation is the key to text-based image retrieval solutions. A multi-label image annotation system based on a well-known deep learning model, the convolutional neural network, was proposed in this paper, together with a multi-label ranking loss function, to complete the training and testing on multi-label image datasets. In the experiments, the CIFAR-10 dataset was first selected to test the effectiveness of the algorithm, and then a quantitative comparison was conducted on the multi-label image dataset Corel 5k. The proposed solution shows superior performance over conventional algorithms.
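To illustrate the kind of loss the abstract refers to, a minimal pairwise multi-label ranking loss can be sketched as follows (a hypothetical NumPy sketch, not the paper's exact formulation; `scores` would be the CNN's output layer and `labels` the ground-truth tag indicators):

```python
import numpy as np

def multilabel_ranking_loss(scores, labels, margin=1.0):
    """Pairwise ranking loss: every positive label should score
    higher than every negative label by at least `margin`."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # hinge over all (positive, negative) label pairs
    diffs = margin + neg[None, :] - pos[:, None]
    return np.maximum(0.0, diffs).sum()

scores = np.array([2.0, 0.5, 1.0])   # network outputs for 3 tags
labels = np.array([1, 0, 1])         # tags 0 and 2 are present
print(multilabel_ranking_loss(scores, labels))
```

Minimizing such a loss pushes the scores of present tags above those of absent tags, which is the essence of ranking-based multi-label training.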
Compressed Domain Synopsis Research in AVS Surveillance Profile
ZHAO Lei and HUANG Hua
Computer Science. 2016, 43 (7): 46-50.  doi:10.11896/j.issn.1002-137X.2016.07.007
Abstract PDF(1208KB) ( 487 )   
Traditional video synopsis methods need to completely decode the video into the pixel domain, which costs too much time. A video synopsis algorithm that requires no decoding was proposed for AVS surveillance video in the compressed domain. The algorithm first analyzes the motion vectors in the AVS bit stream to extract foreground motion macroblocks. Then, valid moving-object trajectories are obtained by tracking the foreground macroblocks. Finally, the trajectories are recombined with a background frame extracted from the source surveillance video to generate the video synopsis. Compared with an algorithm in the pixel domain, experiments show that this algorithm achieves a similar effect at a faster processing speed.
Association Rules Mining Based Cross-network Knowledge Association and Collaborative Applications
HUANG Xiao-wen, YAN Ming, SANG Ji-tao and XU Chang-sheng
Computer Science. 2016, 43 (7): 51-56.  doi:10.11896/j.issn.1002-137X.2016.07.008
Abstract PDF(536KB) ( 763 )   
Nowadays, with the rise of social media, various and disparate social media services are springing up like mushrooms, and the social media variety phenomenon has become more and more pervasive. In this paper, we proposed a novel association rule-based method to investigate this phenomenon, which aims to mine cross-network knowledge associations by leveraging the collective intelligence of the many users overlapping across networks. A cold-start video recommendation application was further designed based on the derived cross-network knowledge associations. Three stages are involved in the framework: (1) heterogeneous topic modeling, where YouTube videos and Twitter users are modeled at the topic level; (2) association rule-based knowledge association, where overlapped users serve as a bridge between different social media networks and a novel association rule-based method is used to derive the topic correlation between networks; (3) cold-start video recommendation, where Twitter users and YouTube videos are transferred to the same topic space and matched at the topic level. Experiments on a real-world dataset demonstrate the effectiveness of the proposed association method, which is able to capture more flexible knowledge associations beyond semantic association. Moreover, the performance of the cold-start video recommendation application is also very promising.
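The core of stage (2) is classic support/confidence rule mining over overlapped users' topic sets. A toy sketch (hypothetical data and function names; the paper's method operates on learned topic distributions rather than raw item sets):

```python
from collections import Counter
from itertools import combinations

def mine_pair_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Toy Apriori-style miner for pairwise rules X -> Y over item sets."""
    n = len(transactions)
    item_count = Counter(i for t in transactions for i in set(t))
    pair_count = Counter(p for t in transactions
                         for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue                       # prune infrequent pairs
        for x, y in ((a, b), (b, a)):
            conf = c / item_count[x]       # confidence of rule x -> y
            if conf >= min_confidence:
                rules.append((x, y, c / n, conf))
    return rules

# overlapped users' topic sets across two networks (hypothetical data)
users = [{"music", "dance"}, {"music", "dance"}, {"music"}, {"sports"}]
print(mine_pair_rules(users))
```

A rule like "dance -> music" would then let the recommender map a Twitter topic to YouTube video topics for cold-start users.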
Analysis on Impact of Context on Mobile Video Push Notification
ZENG Cheng-lin, WANG Zhi, ZHANG Jin and LAM Ringo
Computer Science. 2016, 43 (7): 57-61.  doi:10.11896/j.issn.1002-137X.2016.07.009
Abstract PDF(744KB) ( 481 )   
In mobile video content push notification, the acceptance of pushed content is affected not only by the content itself (e.g. whether the user is interested in it), but also by contextual factors, including when and where the user receives the pushed content and which type of device the user uses to receive it. Surprisingly, little effort has been devoted to studying the impact of context on mobile video push notification. This paper used a data-driven approach to study the impact of time, location and device on push acceptance. Insights include: (1) the impact of time is periodical and the peaks vary; (2) location is an important factor that determines whether users accept pushed mobile video; (3) devices and types of mobile OS also affect acceptance of pushed mobile video. Based on these findings, mobile video push notification can be improved, so that users get a better push notification service in their preferred contexts.
Video Topic Evolution Analysis Based on Clustering
XIE Yu-xiang, LUAN Xi-dao, GUO Yan-ming, LI Chen and NIU Xiao
Computer Science. 2016, 43 (7): 62-66.  doi:10.11896/j.issn.1002-137X.2016.07.010
Abstract PDF(449KB) ( 462 )   
Video topic evolution analysis helps discover valuable patterns in massive video data. In this paper, a video topic evolution analysis method based on clustering was proposed. Firstly, the paper discussed how to analyze the visual similarity between video key frames based on bipartite graphs. Secondly, we proposed a clustering method that applies link analysis to the clustering of video topics, so that the affinity within the same video topic and the distinctions among different video topics are both enhanced. Thirdly, we revealed the evolution procedure of video topics. Finally, experiments were carried out to prove the effectiveness of the proposed method.
Multi-focus Image Fusion Based on Twin-generation Differential Evolution and Adaptive Block Mechanism
CAO Chun-hong, ZHANG Jian-hua and LI Lin-feng
Computer Science. 2016, 43 (7): 67-72.  doi:10.11896/j.issn.1002-137X.2016.07.011
Abstract PDF(860KB) ( 517 )   
Block-based multi-focus image fusion is an important class of algorithms in the field of image fusion. Multi-focus image fusion based on differential evolution takes the image block size as the population of the differential evolution algorithm and, after many generations, finds the block size that yields the best fused image. The standard algorithm, however, loses part of the information of the parent population, which results in slow convergence and a smaller global search range, and when the clarity of corresponding blocks is the same, it changes the pixels of the source images. To overcome these shortcomings, a new fusion algorithm was proposed on the basis of the differential evolution-based multi-focus image fusion algorithm by introducing a twin-generation mechanism and an adaptive block mechanism. The algorithm generates two progeny populations during evolution, which keeps the information of the parent population to the greatest extent, expands the global search range and improves convergence. When the clarity of corresponding blocks is the same, it cuts the blocks into smaller blocks and compares their clarity again, obtaining a better fused image without changing the pixels of the source images. Experimental results show that the improved algorithm produces a better fused image than the former algorithm and has better convergence performance.
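The base optimizer the abstract builds on can be sketched as plain differential evolution over one variable (a hypothetical sketch, not the paper's algorithm; in the fusion setting `f` would score fusion quality as a function of block size):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, gens=100):
    """Minimal DE minimizer over one variable; a twin-generation
    variant would retain both offspring populations instead of
    discarding losing trial vectors."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            others = [p for j, p in enumerate(pop) if j != i]
            a, b, c = random.sample(others, 3)
            trial = min(max(a + F * (b - c), lo), hi)   # mutate and clamp
            if f(trial) <= f(pop[i]):                   # greedy selection
                pop[i] = trial
    return min(pop, key=f)

random.seed(0)
best = differential_evolution(lambda x: (x - 3.0) ** 2, (0.0, 10.0))
```

The greedy selection step is exactly where the standard algorithm discards parent information; the twin-generation mechanism described above is designed to avoid that loss.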
Fast Object Recognition Method Based on Objectness
LIU Tao, WU Ze-min, JIANG Qing-zhu, ZENG Ming-yong and PENG Tao-pin
Computer Science. 2016, 43 (7): 73-76.  doi:10.11896/j.issn.1002-137X.2016.07.012
Abstract PDF(936KB) ( 528 )   
To address the poor real-time performance of object recognition, a fast object recognition method based on objectness was proposed. First, the binarized normed gradients algorithm is applied to a test image to obtain objectness evaluations. Then, candidate bounding boxes are extracted from these evaluations. Next, the deformable part model (DPM) algorithm is used to predict objects only within the image regions of the candidate boxes, which saves the time spent on sliding-window search. Finally, a quick expansion-shrinking procedure is used to refine the output boxes of DPM, improving accuracy. Experimental results on the challenging PASCAL VOC 2007 dataset demonstrate that the proposed method outperforms state-of-the-art detection models in accuracy and runs almost twice as fast as cascade DPM.
Semi-supervised Nonnegative Matrix Factorization Based on Graph Regularization and Sparseness Constraints
JIANG Xiao-yan, SUN Fu-ming and LI Hao-jie
Computer Science. 2016, 43 (7): 77-82.  doi:10.11896/j.issn.1002-137X.2016.07.013
Abstract PDF(1046KB) ( 726 )   
Nonnegative matrix factorization (NMF) is a matrix factorization algorithm under non-negativity constraints. To enhance the recognition rate, a method called graph-regularized and constrained non-negative matrix factorization with sparseness (GCNMFS) was proposed. It not only preserves the intrinsic geometry of the data, but also uses label information for semi-supervised learning and introduces a sparseness constraint on the basis matrix; these terms are integrated into a single objective function. An efficient updating approach was derived, and the convergence of the algorithm was proved. Compared with NMF, GNMF and CNMF, experiments on several face databases show that the proposed method achieves better clustering results and sparseness.
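For context, plain NMF with the standard Lee-Seung multiplicative updates looks as follows (a sketch of the base objective only; GCNMFS adds graph-regularization, label and sparseness terms on top of updates of this general form):

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Factor X (m x n, nonnegative) into W (m x k) @ H (k x n)
    via Lee-Seung multiplicative updates."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # rank-1 data
W, H = nmf(X, k=1)
```

The multiplicative form keeps both factors nonnegative throughout, which is the property the constrained variants preserve while reshaping the objective.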
Image Classification Algorithm Based on Low Rank and Sparse Decomposition and Collaborative Representation
ZHANG Xu, JIANG Jian-guo, HONG Ri-chang and DU Yue
Computer Science. 2016, 43 (7): 83-88.  doi:10.11896/j.issn.1002-137X.2016.07.014
Abstract PDF(746KB) ( 565 )   
Currently, most image classification methods require an adequate training and learning process to achieve high performance. However, problems such as scarcity of training samples and overfitting of parameters are often encountered. To avoid these problems, we presented a non-parametric learning algorithm under the Naive-Bayes Nearest-Neighbor (NBNN) framework, in which non-negative sparse coding, low-rank and sparse decomposition, and collaborative representation are jointly employed. Firstly, non-negative sparse coding combined with max pooling is introduced to represent images, generating local feature matrices of similar training image sets with a low-rank characteristic. Secondly, two kinds of visual dictionaries with category labels are constructed by leveraging low-rank and sparse decomposition, to make full use of the correlation and diversity of images with the same category label. Lastly, test images are represented by collaborative representation for classification. Experimental results demonstrate the effectiveness of the proposed algorithm.
Improved Bacterial Foraging Optimization Algorithm Used for Multi-level Threshold Segmentation Based on Exponent Entropy
ZHANG Xin-ming, TU Qiang and LIU Yan
Computer Science. 2016, 43 (7): 89-94.  doi:10.11896/j.issn.1002-137X.2016.07.015
Abstract PDF(1050KB) ( 464 )   
In view of the ordered positive-integer programming character of multi-level threshold segmentation, an improved bacterial foraging optimization (IBFO) algorithm for multi-level threshold segmentation based on exponent entropy was proposed in this paper. Firstly, the chemotactic step mechanism of the standard bacterial foraging optimization (SBFO) algorithm is changed into a dynamic chemotactic step approach to improve self-adaptation. Secondly, the original elimination-dispersal operator is replaced with a new one that combines random mutation and dynamic local mutation: random mutation is used in the first phase to enhance global search ability, and dynamic local mutation is used in the second phase to improve local search performance. Thirdly, the communication mechanism of SBFO is abandoned to accelerate the algorithm. Finally, IBFO is further modified to fit multi-level threshold segmentation based on exponent entropy. Experimental results show that the proposed method has better optimization performance with less computation time compared with SBFO, MBFO and IPSO.
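One common form of the exponent (exponential) entropy criterion, following Pal and Pal's exponential entropy $H = \sum_i p_i\, e^{1 - p_i}$, selects the single threshold $t$ maximizing the sum of object and background entropies (an assumed standard formulation for illustration; the paper's exact multi-level objective may differ):

```latex
H(t) = \sum_{i=0}^{t} \frac{p_i}{P(t)}\, e^{\,1 - p_i/P(t)}
     + \sum_{i=t+1}^{L-1} \frac{p_i}{1 - P(t)}\, e^{\,1 - p_i/(1 - P(t))},
\qquad P(t) = \sum_{i=0}^{t} p_i
```

Here $p_i$ is the probability of gray level $i$ among $L$ levels; the multi-level case searches an ordered tuple of thresholds, which is the integer programming problem IBFO optimizes.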
Novel Image Segmentation Algorithm via Sparse Principal Component Analysis and Adaptive Threshold Selection
LU Tao, WAN Yong-jing and YANG Wei
Computer Science. 2016, 43 (7): 95-100.  doi:10.11896/j.issn.1002-137X.2016.07.016
Abstract PDF(1029KB) ( 563 )   
Image segmentation is a fundamental problem in machine vision. Threshold-based image segmentation depends on parameter adjustment, is vulnerable to local minima and needs a lot of time, which reduces both the quality and the efficiency of segmentation. To realize adaptive threshold selection in image segmentation, a novel image segmentation algorithm via sparse principal component analysis and adaptive threshold selection was proposed. According to the image content, the algorithm removes noise using the image noise level estimated by sparse principal component analysis. A global segmentation threshold is obtained from the main region of the image based on a 2D histogram, and a local segmentation threshold is obtained from local details of the image based on the moving-average method. Finally, the globally and locally thresholded images are combined to obtain the best segmentation result. Simulation and experimental results on the Berkeley dataset show that, compared with current state-of-the-art algorithms, the proposed algorithm has an advantage in edge accuracy and robustness to noise. It has better segmentation performance both subjectively and objectively, and improves the quality of image segmentation.
Optimal AODV Routing Protocol Based on Multi-objective and Ant Colony Optimization for Mobile Ad Hoc Network
LU Ying and KANG Feng-ju
Computer Science. 2016, 43 (7): 101-105.  doi:10.11896/j.issn.1002-137X.2016.07.017
Abstract PDF(397KB) ( 489 )   
To enhance the stability of routes in mobile ad hoc networks, an optimal AODV routing protocol was proposed, which selects the best route by combining multi-objective optimization and ant colony optimization. First, we calculated five metrics of each node: transmission distance, progression, transmission delay, direction and lifetime. Then, taking the minimization of transmission distance, transmission delay and direction, and the maximization of progression and lifetime as the optimization targets, we built the movement probability function of the ant colony algorithm and refreshed the global pheromone with the local best path. Finally, the best next-hop node with the maximum movement probability was selected, and the best route was generated based on the AODV routing protocol. Simulation results show that, compared with the AODV and EN-AODV routing protocols, the new protocol achieves a higher message delivery rate together with lower end-to-end average delay and routing cost.
Network Coding Based Energy-aware Routing Protocol for Ad Hoc Network
WANG Zhen-chao, CAI Zhi-jie and XUE Wen-ling
Computer Science. 2016, 43 (7): 106-110.  doi:10.11896/j.issn.1002-137X.2016.07.018
Abstract PDF(403KB) ( 407 )   
A network coding based energy-aware routing protocol (ERPNC) was presented to minimize the effect of the limited energy supply of end-nodes in ad hoc networks. ERPNC uses the coding opportunities of nodes to reduce energy consumption by matching the rates of data flows, and predicts the remaining lifetime of nodes from their residual energy and energy consumption speed. ERPNC introduces a new route evaluation function and a new route discovery strategy that combine the total energy consumption of the path with the remaining lifetime of nodes. Moreover, a local route maintenance strategy was introduced to decrease the occurrence of route interruption and packet retransmission. Simulation results show that, compared with other routing protocols, ERPNC is more effective at decreasing transmission energy consumption, balancing network energy consumption, prolonging network lifetime and improving network throughput.
Network Traffic Prediction Algorithm Based on Vector Space Reconstruction
ZHANG Tao and ZHANG Ying-jiang
Computer Science. 2016, 43 (7): 111-114.  doi:10.11896/j.issn.1002-137X.2016.07.019
Abstract PDF(431KB) ( 436 )   
There is a data-storage covert channel between the client and the server, and the network traffic on this channel needs to be accurately predicted in order to avoid network congestion and improve traffic scheduling and management. Traditional methods use linear time series analysis to predict network traffic, which cannot accurately reflect its nonlinear characteristics, so the prediction accuracy is not high. A network traffic prediction algorithm was proposed based on nonlinear time series analysis and vector space (phase space) reconstruction. Phase randomization is applied to discretize the network traffic data, and the network traffic time series model is decomposed into the statistics of multiple nonlinear components. The autocorrelation function is used to obtain the time delay for vector space reconstruction, and a minimum-mutual-information embedding dimension algorithm is used to obtain the embedding dimension of the network flow sequence, realizing the vector space reconstruction of the traffic sequence. In the high-dimensional vector space, higher-order spectral characteristics of the network traffic are extracted and accurate prediction of the network traffic is realized. Simulation results show that the proposed algorithm effectively models the nonlinear state characteristics of the traffic sequence, tracks the dynamics of the traffic state better, and has a lower prediction error than conventional methods.
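The reconstruction step itself is standard time-delay embedding: given a delay and an embedding dimension, the scalar traffic series is unfolded into vectors. A minimal sketch (hypothetical names; the paper estimates `tau` and `dim` from autocorrelation and mutual information rather than fixing them):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (phase-space) reconstruction of a scalar series:
    row t is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

traffic = np.arange(10.0)                 # stand-in for a traffic series
vectors = delay_embed(traffic, dim=3, tau=2)
print(vectors.shape)                      # (6, 3)
```

Prediction then operates on these reconstructed state vectors instead of the raw one-dimensional sequence.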
Architecture Design and Implementation of Intelligent Monitoring System in Unattended Operation Regions
LI Hui, ZHANG Ke and XU Liang
Computer Science. 2016, 43 (7): 115-119.  doi:10.11896/j.issn.1002-137X.2016.07.020
Abstract PDF(655KB) ( 460 )   
In this paper, a new detection system based on intelligent monitoring was proposed to meet target-monitoring requirements in unattended operation regions and harsh environments. It is an innovative application of existing methods. The system architecture and design (including the frame structure and the software and hardware design) were described in detail, and related tests were carried out. Experimental results show that the system meets the requirements of most network scenarios.
Stability Analysis for Predicting LWDF Scheduling Algorithm in M-WiMAX
HU Yong-dong
Computer Science. 2016, 43 (7): 120-124.  doi:10.11896/j.issn.1002-137X.2016.07.021
Abstract PDF(500KB) ( 472 )   
As a standard 4G mobile wireless network, mobile worldwide interoperability for microwave access (M-WiMAX) has a complete quality-of-service guarantee mechanism, and the packet scheduling algorithm is one of its core mechanisms. Packet arrivals of multiple users were modeled as Poisson streams, and a Markov chain was used to model the wireless time-varying channel, so that the M-WiMAX network in PMP mode was modeled as an M/G/1 queuing system. The channel capacity region was then derived, the stability region was calculated, and the packet-level stability of the Pre-LWDF scheduling algorithm was proved with Lyapunov drift stability theory. Finally, an M-WiMAX network simulation environment was built on the NS2 platform to verify the stability of the Pre-LWDF scheduling algorithm. The simulation results show that the Pre-LWDF algorithm has packet-level stability in M-WiMAX networks.
Energy-hole Avoidance Algorithm for WSN Based on Long-link Competition Mechanism
ZHAO Xiang-ning
Computer Science. 2016, 43 (7): 125-130.  doi:10.11896/j.issn.1002-137X.2016.07.022
Abstract PDF(750KB) ( 372 )   
In wireless sensor networks, the nodes near the sink have a heavier workload than other nodes, and their energy is consumed much faster. This phenomenon leads to an “energy hole”, which shortens the lifetime of the entire sensor network. To solve the energy hole problem, this paper presented a k-leader competition algorithm based on a long-link competition mechanism to prolong the network lifetime. The k-leader competition algorithm moves a portion of the workload of the nodes one hop from the sink to nodes farther from the sink. At the same time, a k-leader switch algorithm swaps leader nodes in or out based on their energy consumption, in order to achieve load balancing. This paper analyzed the optimal value of the number of leader nodes k. The simulation results verify the k-leader algorithm's performance in terms of network lifetime and load balancing.
Trustworthiness Analysis of Digital Evidences Based on Chain of Evidences
ZHAO Zhi-yan and SHI Wen-chang
Computer Science. 2016, 43 (7): 131-135.  doi:10.11896/j.issn.1002-137X.2016.07.023
Abstract PDF(466KB) ( 529 )   
With the popularization of information technology, computers have become necessary in our lives, and they are also important tools in criminal activities. Digital evidence often plays a major role in judicial cases and trials, but it is usually questioned in the courts because it is easy to forge. Proving the trustworthiness of digital evidence is a big challenge in forensic research. In this paper, we put forward a framework for analyzing the trustworthiness of digital evidence based on chains of evidence, by building chains to access associated evidence and judging the consistency among pieces of evidence to deduce their trustworthiness.
Primary and Secondary Webpage Information Hiding Algorithm Based on Invisible Characters
WENG Chao, ZHOU Liang and DING Qiu-lin
Computer Science. 2016, 43 (7): 136-140.  doi:10.11896/j.issn.1002-137X.2016.07.024
Abstract PDF(1039KB) ( 474 )   
Current domestic and overseas webpage information hiding algorithms mainly focus on improving hiding efficiency and expanding hidden capacity, but they neglect the security of the hiding algorithm itself. For this reason, the paper put forward a primary-and-secondary webpage information hiding algorithm based on invisible characters (PSWIH), in which information hiding is realized through invisible characters based on ASCII coding as the primary method, while detection is interfered with by changing the letters of tags, embedding invisible characters unrelated to the hidden information, and changing the order of tag attributes as the secondary method. The experimental results show that the PSWIH algorithm is better than methods based on changing the letters of tags, changing the order of tag attributes, or embedding invisible characters, in terms of imperceptibility, extractability and robustness.
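A minimal sketch of the invisible-character idea (a generic illustration using Unicode zero-width characters; the paper's scheme is ASCII-based and adds the anti-detection measures described above, so every name here is hypothetical):

```python
ZERO = "\u200b"   # zero-width space      -> bit 0
ONE = "\u200c"    # zero-width non-joiner -> bit 1

def hide(cover: str, secret: bytes) -> str:
    """Append the secret's bits as invisible characters."""
    bits = "".join(f"{b:08b}" for b in secret)
    return cover + "".join(ONE if bit == "1" else ZERO for bit in bits)

def reveal(stego: str) -> bytes:
    """Recover the secret by filtering out the invisible characters."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in stego if ch in (ZERO, ONE))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = hide("<p>visible page text</p>", b"key")
print(reveal(stego))
```

The rendered page looks unchanged because the added characters have zero display width, which is what the imperceptibility requirement demands.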
P2P Botnet Detection Based on Permutation Entropy and Multi-sensor Data Fusion on Decision Level
SONG Yuan-zhang
Computer Science. 2016, 43 (7): 141-146.  doi:10.11896/j.issn.1002-137X.2016.07.025
Abstract PDF(534KB) ( 502 )   
Aiming at the problems of existing P2P botnet detection methods, a novel P2P botnet detection algorithm based on permutation entropy and multi-sensor data fusion on the decision level was proposed. The algorithm builds two sensors: an anomaly detection sensor and a sensor that distinguishes the causes of anomalies. The former uses permutation entropy to accurately describe the complexity characteristics of network traffic, which do not vary with the structure of the P2P network, the P2P protocol or the attack, and uses a Kalman filter to detect anomalies in these complexity characteristics. Considering that the traffic of Web applications is likely to affect the detection result, the latter sensor utilizes features of TCP flows to address this problem. Finally, the final result is obtained by fusing the results of the two sensors with D-S evidence theory. The experiments show that the proposed algorithm is able to detect P2P botnets with high accuracy.
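The permutation entropy feature itself is the standard Bandt-Pompe measure: the Shannon entropy of ordinal patterns in sliding windows of the traffic series. A self-contained sketch (generic formulation, not the paper's exact parameterization):

```python
import math

def permutation_entropy(series, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy: Shannon entropy of
    the ordinal patterns of length-`order` windows, scaled to [0, 1]."""
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = [series[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

print(permutation_entropy([1, 2, 3, 4, 5, 6]))   # single pattern -> ~0
```

Low values indicate highly regular traffic, and shifts in this complexity measure over time are what the Kalman filter monitors for anomalies.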
Research and Implementation of Fingerprint Identification Security Technology Based on ARM TrustZone
YANG Xia, LIU Zhi-wei and LEI Hang
Computer Science. 2016, 43 (7): 147-152.  doi:10.11896/j.issn.1002-137X.2016.07.026
Abstract PDF(1384KB) ( 928 )   
The security of fingerprint identification technology itself is becoming increasingly prominent with its wide use in intelligent terminal devices. Using the security extension mechanism of ARM TrustZone, techniques and methods for fingerprint identification security were put forward to enhance the security of fingerprint identification on intelligent terminals. They provide a trusted execution environment for the fingerprint identification program to ensure its safe execution and prevent malicious code attacks. Meanwhile, the fingerprint data and fingerprint feature templates are encrypted, and the key is kept in the secure area protected by TrustZone to prevent it from being stolen. In addition, a secure channel for fingerprint data transmission is realized to further ensure the security of sensitive data transmission. Finally, a prototype system was designed and implemented to verify the validity of the proposed techniques and methods, and the experimental results confirm that they are feasible.
Secure and Efficient Hybrid Key Management Mechanism in Heterogeneous WSN
WANG Gang, SUN Liang-xu, ZENG Zi-wei and YANG Dan
Computer Science. 2016, 43 (7): 153-156.  doi:10.11896/j.issn.1002-137X.2016.07.027
Abstract PDF(441KB) ( 391 )   
References | Related Articles | Metrics
Key management is crucially important for all security goals in WSNs.To address the security vulnerabilities and heavy overhead of existing key management schemes in heterogeneous WSNs,a new key management mechanism was put forward.The mechanism includes an ECC-based lightweight signcryption algorithm that not only incurs lower computation and communication costs,but also provides better forward security.On top of this signcryption algorithm,a complete cluster key management protocol is designed that ensures communication security within the cluster and uses a cluster base key to generate the cluster key in each cluster.The cluster base key effectively prevents all sensor nodes from being invalidated when a cluster head is captured.To adapt to the dynamics and scalability of WSNs,the cluster key can be efficiently refreshed and maintained through a cluster key refresh chain.In addition,a cluster key security management model was proposed that adaptively refreshes the cluster key according to changes in the network threat environment,further improving refresh performance.The comparative results show that the presented mechanism is better than existing mechanisms in terms of security and protocol performance.
Method for Detecting Wiretapping Attack in Satellite Network Based on Quantum Cryptography
HUANG Jing, XI Bo, LI Peng, ZHANG Fan and ZHAO Xin-jie
Computer Science. 2016, 43 (7): 157-161.  doi:10.11896/j.issn.1002-137X.2016.07.028
Abstract PDF(461KB) ( 522 )   
References | Related Articles | Metrics
The wiretapping attack is the foundation of many advanced attacks on satellite networks.Since quantum cryptography will be widely used in satellite networks,a method for detecting wiretapping attacks in satellite networks based on quantum cryptography was proposed.Based on the spatial distribution of the satellites,a layered,cluster-based wiretapping attack detection model was first presented.During detection,when neighboring satellites detect a wiretapping attack on their channel,alarm messages are first sent to the head node of their cluster through secure channels;the head node then fuses these messages and forwards them to the ground control center.The secure link between two satellites is established according to the link establishment scheme issued by the ground control center.Eventually,the security and effectiveness of the proposed method were analyzed.The proposed method lays a foundation for further research on satellite network protection techniques.
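The abstract does not spell out the per-channel detection rule. In BB84-style quantum key distribution, eavesdropping is detected by estimating the quantum bit error rate (QBER) on a publicly compared sample of the sifted key; an intercept-resend attacker induces roughly 25% errors, well above the usual ~11% security threshold. A hedged sketch of that check (the threshold and sample fraction are the standard BB84 figures, not values from the paper):

```python
import random

def estimate_qber(alice_bits, bob_bits, sample_frac=0.2, rng=random):
    """Estimate the QBER by comparing a random sample of sifted key bits
    (the compared bits would then be discarded from the key)."""
    n = len(alice_bits)
    idx = rng.sample(range(n), max(1, int(n * sample_frac)))
    errors = sum(alice_bits[i] != bob_bits[i] for i in idx)
    return errors / len(idx)

def wiretap_alarm(alice_bits, bob_bits, threshold=0.11):
    """Raise an alarm when the estimated QBER exceeds the threshold;
    this would trigger the alarm message to the cluster head."""
    return estimate_qber(alice_bits, bob_bits) > threshold
```

On a clean channel the two sifted keys agree and no alarm fires; an intercept-resend tap pushes the sampled error rate far past the threshold.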
Deniable Attribute-based Designated Confirmer Signature without Random Oracles
REN Yan
Computer Science. 2016, 43 (7): 162-165.  doi:10.11896/j.issn.1002-137X.2016.07.029
Abstract PDF(276KB) ( 440 )   
References | Related Articles | Metrics
In this paper,we first proposed a model of deniable attribute-based designated confirmer signatures without random oracles.In this scheme,both the signer and the designated confirmer can run the same protocols to confirm a valid designated confirmer signature or disavow an invalid one.Finally,proofs of correctness and security in the standard model are provided.Analysis shows that the scheme achieves unforgeability and invisibility.
H Boolean Functions Divided into Two Parts and Boolean Functions with High Nonlinearity
HUANG Jing-lian, WANG Zhuo and LI Juan
Computer Science. 2016, 43 (7): 166-170.  doi:10.11896/j.issn.1002-137X.2016.07.030
Abstract PDF(432KB) ( 515 )   
References | Related Articles | Metrics
Using the derivative of Boolean functions and the e-derivative defined by ourselves as research tools,we studied the cryptographic properties of a class of H Boolean functions that satisfy the propagation criterion of degree one and can be divided into the product of two subfunctions,including nonlinearity,correlation immunity and algebraic immunity.We established the relationship between the correlation immunity of this kind of H Boolean function and that of its two subfunctions,and concluded that the correlation immunity of this kind of H Boolean function can reach n/2-1.Moreover,we obtained the relationship between the lowest-algebraic-degree annihilator of such an H Boolean function and its two subfunctions.Further,using the e-derivative and the derivative of Boolean functions,we constructed from the obtained H Boolean functions a class of H Boolean functions with nonlinearity 2^(n-2)+2^(n-3),correlation immunity and 2-order algebraic immunity.In this way,we addressed the problem of improving the nonlinearity of Boolean functions,and the existence problem of Boolean functions combining high nonlinearity,propagation,correlation immunity and high algebraic immunity.
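Nonlinearity as used here is defined through the Walsh-Hadamard spectrum, NL(f) = 2^(n-1) - max|W_f|/2. A small self-contained sketch of that computation (the bent-function example is a standard illustration, not one of the paper's constructions):

```python
def walsh_spectrum(tt):
    """Walsh-Hadamard spectrum of a Boolean function given as a 0/1
    truth table of length 2^n (in-place fast transform, O(n*2^n))."""
    w = [1 - 2 * b for b in tt]          # map {0,1} -> {+1,-1}
    h, n = 1, len(tt)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(tt):
    """NL(f) = 2^(n-1) - max|W_f| / 2."""
    return (len(tt) - max(abs(v) for v in walsh_spectrum(tt))) // 2

# Bent function f(x1..x4) = x1x2 XOR x3x4 attains the maximum
# nonlinearity 2^(n-1) - 2^(n/2-1) = 6 for n = 4.
tt = [(((i >> 3) & 1) & ((i >> 2) & 1)) ^ (((i >> 1) & 1) & (i & 1))
      for i in range(16)]
print(nonlinearity(tt))  # 6
```

The constructed functions with nonlinearity 2^(n-2)+2^(n-3) could be checked against this definition in the same way.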
Software Defect Prediction Model Based on GMDH Causal Relationship
ZHANG De-ping, LIU Guo-qiang and ZHANG Ke
Computer Science. 2016, 43 (7): 171-176.  doi:10.11896/j.issn.1002-137X.2016.07.031
Abstract PDF(485KB) ( 542 )   
References | Related Articles | Metrics
Software defect prediction is an important aspect of software reliability research.In this paper,we presented a software defect prediction model based on GMDH networks and causality test theory.Borrowing the idea of the Granger causality test,the model selects software metrics that have a causal relationship with defects,and uses a GMDH network to test the non-linear causality between multiple software metrics and defects.Finally,on two real software failure data sets,we designed experiments comparing the proposed method with the Granger-test-based software defect prediction model.The experimental results show that the proposed model is more effective and efficient than the Granger-test-based model.
Goal Oriented Approach for Analyzing Mobile Business Processes
LIU Chun, LIU Yong, WANG Ya-qian and HAN Dao-jun
Computer Science. 2016, 43 (7): 177-179.  doi:10.11896/j.issn.1002-137X.2016.07.032
Abstract PDF(356KB) ( 717 )   
References | Related Articles | Metrics
With the increasing popularity of the mobile Internet,improving business processes to better meet users' requirements challenges most enterprises.However,there is still a lack of effective approaches for deciding which business processes to improve and how to improve them.To address these issues,this paper proposed a goal-oriented approach for analyzing mobile business processes.Based on the assumption that each business process serves one user requirement,it builds a mapping between the users' requirements model and the business process model,and improves the business process model by analyzing changes in users' requirements.The business processes of a hospital were used to illustrate the feasibility of the proposed approach.
Fault Tree Generation Method Based on UML Class Diagram and Activity Diagram
XU Hui, YAN Xue-feng and ZHOU Yong
Computer Science. 2016, 43 (7): 180-185.  doi:10.11896/j.issn.1002-137X.2016.07.033
Abstract PDF(741KB) ( 483 )   
References | Related Articles | Metrics
A fault tree generated from a UML activity diagram alone can only reflect faults in the behavior stream and cannot reflect static faults.To address this,a method combining the activity diagram with the class diagram was proposed.On the basis of the original activity diagram,the class diagram is used to describe the static information of the system,and transformation rules from the activity diagram and class diagram to the fault tree model are designed to transform the dynamic behavior information of the activity diagram and the static information of the class diagram into fault node elements.Based on these transformation rules,an algorithm is designed to traverse the activity diagram and class diagram in reverse and generate the fault tree top-down.A modeling case shows that the fault tree generated from the combined UML activity and class diagram model reflects both the behavioral fault information and the static state of the system,providing a new and effective way of generating fault trees.
Software Defect Prediction Model Based on Adaboost Algorithm
XIONG Jing, GAO Yan and WANG Ya-yu
Computer Science. 2016, 43 (7): 186-190.  doi:10.11896/j.issn.1002-137X.2016.07.034
Abstract PDF(437KB) ( 683 )   
References | Related Articles | Metrics
A new software defect prediction method was proposed in this paper,which uses an Adaboost cascade classifier as its prediction model.The principle of the Adaboost algorithm is to train multiple weak classifiers and combine them into a stronger cascade classifier,which effectively avoids over-fitting.In this paper,comparative experiments on the NASA software defect data sets were carried out between the original BP network and Adaboost using BP networks as weak classifiers.The experimental results show that the software defect prediction model based on the Adaboost cascade classifier improves prediction performance significantly.
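As a rough illustration of the boosting principle described above (not the paper's BP-network weak learners or the NASA feature set), here is a minimal pure-Python AdaBoost over one-feature decision stumps on toy "metric" data:

```python
import math

def train_stump(X, y, w):
    """Best one-feature threshold classifier under sample weights w."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted(set(row[f] for row in X)):
            for sign in (1, -1):
                pred = [sign if row[f] <= thr else -sign for row in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """Train an AdaBoost ensemble of decision stumps; labels y in {-1,+1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thr, sign))
        # re-weight: misclassified samples gain weight for the next round
        pred = [sign if row[f] <= thr else -sign for row in X]
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, row):
    score = sum(a * (s if row[f] <= t else -s) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1

# toy 1-D "metric": modules with metric value in [3, 6] are defective (+1);
# no single stump fits this, but a few boosted stumps do
X = [[x] for x in range(1, 9)]
y = [-1, -1, 1, 1, 1, 1, -1, -1]
model = adaboost(X, y, rounds=5)
```

The strong classifier is the sign of the alpha-weighted vote, which is how the cascade combines its weak members.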
Local Pattern Query from Gene Expression Data
JIANG Tao, LI Zhan-huai, SHANG Xue-qun, CHEN Bo-lin and LI Wei-bang
Computer Science. 2016, 43 (7): 191-196.  doi:10.11896/j.issn.1002-137X.2016.07.035
Abstract PDF(607KB) ( 573 )   
References | Related Articles | Metrics
Local pattern mining plays an important role in gene expression data analysis.One classical model of local pattern mining is the order-preserving submatrix (OPSM),which captures the general tendency of a subset of genes under a subset of conditions.With the development of high-throughput gene microarray techniques,massive gene expression datasets are produced,making it urgent to design high-performance algorithms.Most existing methods are batch mining techniques;even where query methods exist,their comprehensiveness and performance can still be improved.To make data analysis efficient and accurate,we first proposed a prefix-tree based indexing method for gene expression data,and then a column-keyword based OPSM query method.It uses indexing and search instead of batch mining to query positive,negative and time-delayed OPSMs.We conducted extensive experiments comparing our method with existing methods.The experimental results demonstrate that the proposed method is efficient and scalable.
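The OPSM property itself is simple to state: every selected gene must rank the selected conditions in the same order. A small illustrative check (toy expression values, not the paper's index or query algorithm):

```python
def is_opsm(matrix, rows, cols):
    """True if every selected row induces the same ordering of the
    selected columns, i.e. (rows, cols) is an order-preserving submatrix."""
    def col_order(r):
        return tuple(sorted(cols, key=lambda c: matrix[r][c]))
    return len({col_order(r) for r in rows}) == 1

expr = [
    [0.2, 0.9, 0.5, 0.1],   # gene 0
    [1.1, 3.0, 2.2, 0.4],   # gene 1
    [0.3, 0.8, 0.9, 0.2],   # gene 2
]
print(is_opsm(expr, [0, 1], [0, 1, 2]))  # True: both rise as c0 < c2 < c1
print(is_opsm(expr, [0, 2], [0, 1, 2]))  # False: gene 2 peaks at c2, not c1
```

A query method such as the one proposed would locate candidate row sets via the prefix-tree index rather than testing submatrices exhaustively.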
Efficient and Dynamic Data Management System for Cassandra Database
WANG Bo-qian, YU Qi, LIU Xin, SHEN Li, WANG Zhi-ying and CHEN Wei
Computer Science. 2016, 43 (7): 197-202.  doi:10.11896/j.issn.1002-137X.2016.07.036
Abstract PDF(525KB) ( 562 )   
References | Related Articles | Metrics
Cassandra is a widely used database and a top-level Apache project.In the Cassandra distributed database system,a large number of write requests produce numerous scattered SSTable structures and high data redundancy,making user read requests inefficient.This can be mitigated either by the partial data compaction mechanism triggered automatically by the system or by the overall data compaction triggered by human intervention.However,on one hand,an ill-timed automatic partial compaction seriously reduces the performance of user read operations;on the other hand,a long manual overall compaction occupies a large amount of system resources,severely restricting overall system performance.To solve this problem,we presented an efficient and dynamic data management mechanism.Firstly,by monitoring the system environment and managing data by time and size,appropriate strategies are developed for when to compact,which files to involve,and how to run the compaction.Secondly,the impact of compaction on system performance is reduced by shortening the compaction time through specific optimizations.The final results show that this data management system optimizes the Cassandra compaction process and ultimately improves response speed for read requests.
Full-temporal Index of Moving Objects Based on Distributed Main Memory Database
ZHOU Xiang-yu, CHENG Chun-ling and YANG Yan-ying
Computer Science. 2016, 43 (7): 203-207.  doi:10.11896/j.issn.1002-137X.2016.07.037
Abstract PDF(521KB) ( 461 )   
References | Related Articles | Metrics
Because traditional moving-object indexes ignore the cache consciousness of index nodes,only the two-layer memory/disk hierarchy is optimized.Thus,this paper proposed a novel full-temporal index structure named DFTBx-tree based on a distributed main memory database.The new index structure is optimized across the cache,main memory and hard disk.The size of index nodes is set according to conditions such as the cache line size,the number of instructions and the number of TLB misses,while the size of historical data migration nodes is designed according to the size of disk data pages.Therefore,the cache and main memory can read the data of an interior or leaf node in a single access,avoiding the delay caused by multiple reads.Moreover,full-temporal indexing of moving objects is supported by linking historical data through a migration chain.Experiments show that,compared with other algorithms,the DFTBx-tree achieves higher efficiency in query and update operations.
Acquiring Relationships Between Geographical Entities Based on Semantic Grammar
ZHOU Qi, LU Ye, LI Ting-yu, WANG Ya, ZHANG Zai-yue and CAO Cun-gen
Computer Science. 2016, 43 (7): 208-216.  doi:10.11896/j.issn.1002-137X.2016.07.038
Abstract PDF(824KB) ( 785 )   
References | Related Articles | Metrics
Geographic information and data are important components of the objective knowledge world.Geographic information extraction (GIE) aims to extract various relationships between geographic entities from unstructured geographic text.A novel method for GIE was proposed,which depends on semantic parsing with a geographic grammar.First,GeoRSG (Geographical Relationship Semantic Grammar) was constructed,which reflects geographic relationships in written Chinese.GeoRSG also provides a classification of relationships between geographic entities,and uses a rule-based method to depict linguistic expressions of relationships in text.Then,we implemented a parser,called the GeoRSG Parser,which obtains geographical knowledge in predicate form with the help of GeoRSG.Experiments indicate that the method can obtain 81 ternary relationships and 816 binary relationships between geographic entities from 1000 statements,achieving a precision of 88.85%.
Big Data-driven Complaint Prediction Model
ZHOU Wen-jie, YANG Lu and YAN Jian-feng
Computer Science. 2016, 43 (7): 217-223.  doi:10.11896/j.issn.1002-137X.2016.07.039
Abstract PDF(612KB) ( 675 )   
References | Related Articles | Metrics
Because of fierce competition in the telecommunication (telco) industry,reducing the customer complaint rate and improving customer service are crucial for telecommunication companies to improve their competitive advantage.Thus,accurately predicting complaint behavior to reduce complaint rates has become one of the most important tasks for telco operators.Traditional complaint prediction models focus only on classification algorithms and manual feature selection,and do not unleash the full power of telco big data.In this paper,we proposed a big data-driven complaint prediction model on the Hadoop/Spark platform using efficient parallel random forests.To better explore the performance of the proposed method,we performed feature engineering not only on data from the business support system (BSS) and the operations support system (OSS),but also on the customer service records (CSR).Moreover,several useful graph-based features and second-order features on relationships between users were designed to enhance predictive performance.Experimental results on production data from a telco operator in Shanghai show that training the complaint prediction model with more data sources and higher-dimensional data yields higher prediction accuracy than state-of-the-art algorithms.Based on these results,we took comfort measures on the targeted users,which lowered their complaint rate and brought significant business value to the operator.
Taxonomy Construction Based on User Self-describing Tags
LIU Su-qi, BAI Guang-wei and SHEN Hang
Computer Science. 2016, 43 (7): 224-229.  doi:10.11896/j.issn.1002-137X.2016.07.040
Abstract PDF(881KB) ( 452 )   
References | Related Articles | Metrics
Schema-level knowledge is vital for the development of the semantic Web.However,the amount of schema knowledge in the current linking open data (LOD) is limited.To alleviate this issue,this paper proposed an approach for constructing a taxonomy from users' self-describing tags in social networks.The approach first uses a search-engine-based tag blocking algorithm to partition tags describing the same topic into the same block.Then it uses a semi-supervised label propagation algorithm to detect hypernym relations between tags in the same block.Finally,it applies a greedy algorithm based on heuristic rules to construct the taxonomy.A large-scale,high-quality taxonomy can be constructed by applying the proposed approach to social Web sites.The experimental results show that,compared with existing related work,the proposed approach performs better in terms of precision,recall and F-score.
Model to Solve English Verb-Noun Collocation Errors
DU Yi-min, WU Gui-xing and WU Min
Computer Science. 2016, 43 (7): 230-233.  doi:10.11896/j.issn.1002-137X.2016.07.041
Abstract PDF(464KB) ( 523 )   
References | Related Articles | Metrics
English learners often make mistakes in verb-noun collocations.Through an analysis of verb-noun collocation errors in CLEC,a model was proposed in this paper to correct such mistakes made by Chinese learners.A library of verb-noun collocations was first built,a method was then put forward to measure the similarity between collocations,and the similarity between the target and the library was calculated to obtain a coarse set of similar collocations.After that,a classifier was applied to filter out incorrect collocations,and the final candidate set was ranked by a language model to obtain the correction suggestions.On test data constructed from the BNC corpus,this method,which combines similarity inference and contextual features,has a significant effect on correcting verb-noun collocation errors.
Cross-domain Sentiment Classification Based on Optimizing Classification Model Progressively
ZHANG Jun and WANG Su-ge
Computer Science. 2016, 43 (7): 234-239.  doi:10.11896/j.issn.1002-137X.2016.07.042
Abstract PDF(515KB) ( 433 )   
References | Related Articles | Metrics
Cross-domain sentiment classification has attracted growing attention in the natural language processing field.Given that traditional active learning cannot exploit the information shared between domains,and that the bag-of-words model cannot filter out words unrelated to sentiment classification,a cross-domain sentiment classification method based on progressively optimizing the classification model was proposed.Firstly,this paper selected the shared sentiment words as features to train a classification model on the labeled source domain,then used the model to predict initial category labels for the target domain and selected the texts with high confidence as the initial seed texts of the learning model.Secondly,both high-confidence and low-confidence texts were added to the training set at each iteration.Finally,the feature set was extracted to transform the feature space based on a sentiment dictionary,evaluation collocation rules and auxiliary feature words.The experimental results indicate that this method not only improves the accuracy of cross-domain sentiment classification effectively,but also reduces the cost of manual annotation to some extent.
Hospital Information System Continuous Use of Patients:An Empirical Study Based on TAM and ECT
DAI Yi-ling, GU Dong-xiao, LU Wen-xing and LIANG Chang-yong
Computer Science. 2016, 43 (7): 240-244.  doi:10.11896/j.issn.1002-137X.2016.07.043
Abstract PDF(475KB) ( 680 )   
References | Related Articles | Metrics
Based on the technology acceptance model (TAM) and expectation-confirmation theory,this paper carried out an empirical study on patients' intention to continue using hospital information systems.The results show that perceived ease of use,perceived usefulness and expectation confirmation all have a significant positive influence on behavioral intention,with perceived ease of use being the most important factor.Meanwhile,expectation confirmation also has a significant positive influence on behavioral intention,mediated by satisfaction with the information system and perceived usefulness.Suggestions for hospital informatization and medical information management were proposed.
Multi-clusters IB Algorithm for Imbalanced Data Set
JIANG Peng, YE Yang-dong and LOU Zheng-zheng
Computer Science. 2016, 43 (7): 245-250.  doi:10.11896/j.issn.1002-137X.2016.07.044
Abstract PDF(558KB) ( 433 )   
References | Related Articles | Metrics
When dealing with imbalanced data sets,the original IB method tends to produce clusters of relatively uniform size,resulting in unsatisfactory clustering.To solve this problem,this paper proposed a multi-clusters information bottleneck (McIB) algorithm.The McIB algorithm reduces the skewness of the data distribution by an under-sampling method,dividing the imbalanced data set into multiple clusters of relatively uniform size.The algorithm consists of three steps.First,a dividing measurement standard is proposed to determine the sampling ratio parameter.Second,the McIB algorithm preliminarily analyzes the data to generate reliable multi-clusters.Finally,it merges clusters into larger ones according to inter-cluster similarity,organizing the multiple clusters that represent each actual cluster to obtain the final clustering result.Experimental results show that the McIB algorithm can effectively mine the patterns residing in imbalanced data sets,and its performance is better than that of other common clustering algorithms.
PODKNN:A Parallel Outlier Detection Algorithm for Large Dataset
GOU Jie, MA Zi-tang and ZHANG Zhe-cheng
Computer Science. 2016, 43 (7): 251-254.  doi:10.11896/j.issn.1002-137X.2016.07.045
Abstract PDF(481KB) ( 563 )   
References | Related Articles | Metrics
In order to improve the efficiency of outlier detection on large-scale data sets,a parallel outlier detection algorithm based on K-nearest neighbors (PODKNN) was put forward.The algorithm preprocesses the data set with a partitioning strategy,finds the K-nearest neighbors and calculates the outlier degrees within each partition,and then merges the results and selects the outliers.The algorithm is designed to fit the MapReduce programming model,achieving parallelization and improving computational efficiency on large-scale data sets.The experimental results show that PODKNN has the advantages of high speedup and good scalability.
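The per-partition core of such a method is the K-nearest-neighbour outlier degree. A serial sketch of that building block (the MapReduce partitioning and merge steps are omitted; the points are toy data):

```python
import math

def knn_outlier_scores(points, k=2):
    """Outlier degree of each point = average distance to its k nearest
    neighbours (larger = more anomalous); brute force for clarity."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = knn_outlier_scores(pts, k=2)
print(max(range(len(pts)), key=scores.__getitem__))  # 4 -> (10, 10) stands out
```

In the parallel version, each map task would score its own partition this way and the reduce step would merge the partial results before selecting the global top outliers.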
Improved Density Peaks Based Clustering Algorithm with Strategy Choosing Cluster Center Automatically
MA Chun-lai, SHAN Hong and MA Tao
Computer Science. 2016, 43 (7): 255-258.  doi:10.11896/j.issn.1002-137X.2016.07.046
Abstract PDF(425KB) ( 726 )   
References | Related Articles | Metrics
The density peaks based clustering method (CFSFDP) was introduced in the paper.For the problem that it is difficult to decide the number of clusters with CFSFDP,an improved algorithm was presented.With a strategy for choosing cluster centers automatically,the algorithm searches for the "turning point" in the trend of the cluster center weights;the points whose weight is larger than the turning point are then taken as cluster centers.This strategy avoids the error introduced by manual selection from the decision graph.Experiments comparing the improved algorithm with DBSCAN and CFSFDP on five data sets show that it has better accuracy and robustness,and can be applied to clustering analysis of low-dimensional data.
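CFSFDP's decision-graph quantities are straightforward to compute. Below is a sketch of the local density rho, the distance-to-denser-point delta, and the center weight gamma = rho * delta on toy 2-D points; the improved algorithm's automatic turning-point search is not reproduced, and the cutoff dc is chosen by hand here.

```python
import math

def density_peaks_weights(points, dc):
    """CFSFDP quantities: rho_i = #points within cutoff dc,
    delta_i = distance to the nearest point of higher density.
    Cluster centres have large gamma_i = rho_i * delta_i."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(d[i]))
    return [r * g for r, g in zip(rho, delta)]

# two well-separated square clusters, each with a centre point (idx 4 and 9)
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5),
       (10, 10), (10, 11), (11, 10), (11, 11), (10.5, 10.5)]
gamma = density_peaks_weights(pts, dc=1.3)
```

Sorting gamma in descending order gives the decision sequence in which the improved algorithm looks for the turning point.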
Weighted Bipartite Network Recommendation Algorithm Based on Increasing Similarity Coefficient
LI Zhen-dong, LUO Qi and SHI Li-li
Computer Science. 2016, 43 (7): 259-264.  doi:10.11896/j.issn.1002-137X.2016.07.047
Abstract PDF(507KB) ( 528 )   
References | Related Articles | Metrics
The recommendation algorithm based on bipartite networks is a research hotspot in personalized recommendation systems,and the difficulty lies in how to exploit users' evaluations scientifically to make efficient and accurate recommendations for target users in the absence of rating data;this problem has received considerable attention from scholars.Therefore,a new recommendation algorithm was put forward that takes a monotonic saturation function as the weight,using the tangent of the number of items co-rated by the target user and other users,measured against the mean over all users,to adjust the traditional similarity coefficient.After the coefficient is adjusted,the similarities are sorted in descending order,and the set of the first K nearest neighbors is used for the target user's recommendation.The experimental results prove that the revised algorithm improves the accuracy of recommendation and reduces its complexity.
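As an illustration of the general scheme, a saturation-weighted similarity followed by top-K neighbour recommendation, here is a toy sketch; the tanh weight and cosine similarity are illustrative stand-ins, not the paper's exact coefficient.

```python
import math

def recommend(ratings, target, k=2):
    """Top-K neighbour recommendation on a user-item bipartite graph.
    Similarity = cosine on the 0/1 rating vectors, damped by a
    saturating weight tanh(|common items| / mean degree)."""
    users = [u for u in ratings if u != target]
    mean_deg = sum(len(v) for v in ratings.values()) / len(ratings)
    def sim(u):
        common = ratings[u] & ratings[target]
        cos = len(common) / math.sqrt(len(ratings[u]) * len(ratings[target]))
        return cos * math.tanh(len(common) / mean_deg)
    neighbours = sorted(users, key=sim, reverse=True)[:k]
    # recommend items the neighbours liked that the target has not seen
    candidates = set().union(*(ratings[u] for u in neighbours)) - ratings[target]
    return sorted(candidates)

ratings = {                      # user -> set of liked items (toy data)
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"x", "y"},
    "u4": {"a", "c", "d", "e"},
}
print(recommend(ratings, "u1", k=2))  # ['d', 'e'] from the two closest users
```

The saturating factor plays the role described in the abstract: users with few co-rated items contribute little even if their raw cosine similarity happens to be high.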
Classification of Multi-scale Functional Brain Network in Depression
CHENG Chen, GUO Hao and CHEN Jun-jie
Computer Science. 2016, 43 (7): 265-267.  doi:10.11896/j.issn.1002-137X.2016.07.048
Abstract PDF(381KB) ( 638 )   
References | Related Articles | Metrics
As a complex network analysis method,the brain network has been widely accepted in the field of neuroimaging.Research shows that the node scale of the brain network has a major impact on its topological properties.This paper used resting-state functional imaging data to construct brain networks for patients and normal controls under five different node scales,compared the variance of the network topological properties,and then applied four different algorithms for classification.The results show that the node scale not only affects the topological properties,but also has a direct effect on the construction of the classification model.The support vector machine (RBF kernel) model shows the best classification results when the node scale is 250,with an average accuracy of 83.18%.The research results have important application value in the clinical diagnosis of depression,and provide a significant reference for selecting network nodes in machine learning on brain networks.
Unequally Spaced Linear Array Synthesis Using Modified Wind Driven Optimization Algorithm
REN Zuo-lin, TIAN Yu-bo and SUN Fei-yan
Computer Science. 2016, 43 (7): 268-274.  doi:10.11896/j.issn.1002-137X.2016.07.049
Abstract PDF(621KB) ( 415 )   
References | Related Articles | Metrics
To overcome shortcomings of the traditional wind driven optimization (WDO) algorithm for the synthesis of unequally spaced linear antenna arrays,such as poor convergence accuracy and weak local search capability,the WDO with wavelet mutation (WDOWM) algorithm was proposed.The wavelet mutation operator introduces randomization to enrich population diversity.Applying the modified algorithm to the synthesis of multi-element unequally spaced linear antenna arrays,a second-order multi-factor,multi-level uniform design method was used to determine the algorithm parameter combinations.The simulation results show that its convergence accuracy and speed are superior to the traditional WDO algorithm in the pattern synthesis of array antennas with low side-lobe level suppression and null control in specified directions.In addition,the proposed algorithm outperforms the particle swarm optimization (PSO) algorithm used in the cited references.These results suggest that the WDOWM algorithm has good performance and is suitable for antenna synthesis problems.
Employing AS-FOA for Optimization of GRNN Network with Application to Financial Warning Research
WANG Ying-bo and CHAI Jia-jia
Computer Science. 2016, 43 (7): 275-280.  doi:10.11896/j.issn.1002-137X.2016.07.050
Abstract PDF(493KB) ( 426 )   
References | Related Articles | Metrics
The fruit fly optimization algorithm (FOA) easily falls into local optima when optimizing complex problems.To solve this problem,the adaptive step fruit fly optimization algorithm (AS-FOA) was put forward.The improved FOA was used to find the optimal parameters of a GRNN network,and financial data were used for crisis warning to verify the feasibility of the algorithm.The algorithm gives each fruit fly two random directions and introduces two concepts,a stability threshold and a fitness step-length factor,to define the flies' active and steady states,thus effectively preventing the slow convergence and low accuracy caused by local optima when FOA searches for the optimal GRNN parameters.The experimental results show that AS-FOA can quickly find the best parameters of the GRNN network and achieves higher warning accuracy when applied to financial data.
Precise Identification of Seed Users Based on Information Flow in Big Data
XIE Yang-xiao-jie and ZHAO Ling
Computer Science. 2016, 43 (7): 281-284.  doi:10.11896/j.issn.1002-137X.2016.07.051
Abstract PDF(321KB) ( 495 )   
References | Related Articles | Metrics
Aiming at the precise identification of seed users under big data,we analyzed two major factors that drive users to become seed users,namely time priority and attribute characteristics,and two characteristics of how seed information spreads,namely propagation time difference and directionality.Accordingly,we proposed a method to quickly find seed users.First,users are grouped by their attribute features.By analyzing the time differences and SMS circulation among the groups,the direction of the information flow can be found,so the search range is gradually narrowed and candidate seeds are filtered by a threshold.We established an evaluation model tree,designed a seed user evaluation system,and used it to compute the final scores that identify the seed users.
Gait Recognition Algorithm Based on Hidden Markov Model
ZHANG Xiang-gang, TANG Hai, FU Chang-jun and SHI Yu-liang
Computer Science. 2016, 43 (7): 285-289.  doi:10.11896/j.issn.1002-137X.2016.07.052
Abstract PDF(753KB) ( 686 )   
References | Related Articles | Metrics
Human gait is the walking pattern of a human being.Recently,gait recognition has become a hot research topic in many fields,and in particular the distinction of gait segments plays a key role in gait recognition.This paper used HMMs to recognize different gait segments from an encoder on the knee joint and a leg-mounted accelerometer.Firstly,data pre-processing and feature extraction methods were applied.Secondly,an HMM-based model was presented for recognizing gait segments,covering the model structure,parameter training and gait recognition.Thirdly,the performance of the gait recognition was evaluated,obtaining a total accuracy of 91.06%,which proves that the HMM can accurately recognize gait segments and has good performance.
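Once an HMM is trained, the gait segment sequence is typically recovered by Viterbi decoding. A minimal sketch with a hypothetical two-phase model (states, transition and emission probabilities are all invented for illustration, not the paper's trained parameters):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence (gait phases) for an observation
    sequence, via the standard Viterbi dynamic program."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prob, prev = max(
                (V[t-1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("stance", "swing")
start_p = {"stance": 0.6, "swing": 0.4}
trans_p = {"stance": {"stance": 0.7, "swing": 0.3},
           "swing":  {"stance": 0.4, "swing": 0.6}}
# observations: quantized accelerometer energy (hypothetical values)
emit_p = {"stance": {"low": 0.8, "high": 0.2},
          "swing":  {"low": 0.3, "high": 0.7}}
print(viterbi(["low", "low", "high", "high", "low"],
              states, start_p, trans_p, emit_p))
# ['stance', 'stance', 'swing', 'swing', 'stance']
```

In the paper's setting the emissions would come from the encoder and accelerometer features after pre-processing, with parameters estimated during training.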
Renal Cortex Segmentation Using Graph Cuts and Level Sets
SHI Yong-gang, TAN Ji-shuang and LIU Zhi-wen
Computer Science. 2016, 43 (7): 290-293.  doi:10.11896/j.issn.1002-137X.2016.07.053
Abstract PDF(1200KB) ( 491 )   
References | Related Articles | Metrics
Kidney segmentation is a key step in medical image analysis and non-invasive computer-aided diagnosis.The regions of the kidney and renal cortex are extracted in order to compute the volume and thickness of the cortex;these measurements are used to assess renal function and to plan treatment.Based on the similarity between consecutive slices of a three-dimensional renal image,an automatic kidney and renal cortex segmentation algorithm using graph cuts and level sets was proposed in this paper.The slice with sufficient intensity contrast and high definition is taken as the initial reference.A Hough forest is applied to detect the kidney region,estimate its intensity distribution,and build the energy function for kidney segmentation.Then mathematical morphology is used to obtain a rough contour for the next slice.Based on the initial segmentation result,the initial contours are positioned and level sets are used to partition the renal cortex.This process continues until all slices are segmented.The test results show that the proposed algorithm segments the kidney and renal cortex effectively.
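The morphology step that propagates one slice's segmentation to the next can be illustrated as computing a narrow band around the previous mask, which then serves as the rough initial contour. The structuring-element iterations and the toy mask below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def rough_contour_band(prev_mask, iterations=2):
    """Narrow band around the previous slice's kidney mask: dilation minus
    erosion. The next slice's contour is initialized inside this band."""
    outer = binary_dilation(prev_mask, iterations=iterations)
    inner = binary_erosion(prev_mask, iterations=iterations)
    return outer & ~inner

# Toy previous-slice mask: a filled square standing in for the kidney.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
band = rough_contour_band(mask)
```

Because consecutive slices are similar, the true boundary in the next slice falls inside this band, so the level-set evolution starts close to its target.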
Video Denoising Method Based on Improved Dual-domain Image Denoising
QUAN Li, HU Yue-li, ZHU An-ji and YAN Ming
Computer Science. 2016, 43 (7): 294-296.  doi:10.11896/j.issn.1002-137X.2016.07.054
Abstract PDF(1110KB) ( 371 )   
References | Related Articles | Metrics
Image denoising continues to be an active research topic.The recently proposed BM3D method is based on block matching,which introduces visible artifacts in homogeneous regions,manifesting as low-frequency noise.This paper offered a hybrid method that is easy to implement and yet rivals BM3D in quality.The noise differentials were estimated using robust kernels in two domains,a spatial range domain and a frequency range domain;this use of robust estimators unifies spatial-domain and wavelet-domain methods.Exploiting temporal redundancy is highly effective for video denoising,and the method is further shown to be particularly suitable for it.Compared with the DCT-based method,the PSNR of the proposed method is improved by about 1dB.
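The spatial-domain half of a dual-domain scheme, where a robust range kernel downweights pixels that differ strongly from the center so edges survive smoothing, can be sketched as a small bilateral-style filter. The kernel widths and the toy step-edge signal are illustrative.

```python
import numpy as np

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.3):
    """1-D bilateral filter: a spatial Gaussian times a robust range kernel,
    so homogeneous regions are smoothed while edges are preserved."""
    out = np.empty_like(signal, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets ** 2 / (2 * sigma_s ** 2))
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        window = signal[idx]
        # Robust range kernel: near-zero weight for outlier-like neighbours.
        rng_w = np.exp(-(window - signal[i]) ** 2 / (2 * sigma_r ** 2))
        w = spatial * rng_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Noisy step edge: denoising should reduce noise without blurring the step.
gen = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * gen.standard_normal(100)
y = bilateral_1d(x)
```

In the full dual-domain method the residual left by this spatial pass is cleaned again with a robust kernel in the frequency domain; only the spatial pass is shown here.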
Fusion Algorithm of Multispectral and Panchromatic Image Using Guided Filter and Imaging System Characteristics
LI Xu-han, DONG An-guo and FENG Jian-hu
Computer Science. 2016, 43 (7): 297-302.  doi:10.11896/j.issn.1002-137X.2016.07.055
Abstract PDF(1032KB) ( 451 )   
References | Related Articles | Metrics
In order to improve the quality of multispectral and panchromatic image fusion,a fusion algorithm using the guided filter and imaging system characteristics was proposed.First of all,it adopts the guided filter to establish the relationship between the lower-quality panchromatic image and the multispectral image,and uses this relationship to interpolate the multispectral image.After that,the multispectral and panchromatic images are divided and decomposed by the NSCT transform,and the LCCS and FOCC features together with the imaging system characteristics are combined for region-based merging of the high-frequency coefficients.Finally,the fused image is obtained by the inverse NSCT transform.Numerical experiments show that the algorithm not only retains the spectral information of the multispectral image,but also injects the details of the panchromatic image into the fused image as much as possible,improving the effect of multispectral image fusion.
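The guided filter used to relate the panchromatic and multispectral images is a standard locally-linear filter; a minimal grayscale version is sketched below. The window radius, eps, and the random test image are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Guided filter: the output q is a locally linear transform of the
    guide I (q = a*I + b per window) that approximates the input p."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size)   # windowed mean
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)      # local linear coefficient
    b = mean_p - a * mean_I
    return box(a) * I + box(b)      # average the per-window models

# Smoke test: guiding p by itself should roughly reproduce p.
gen = np.random.default_rng(1)
I = gen.random((32, 32))
q = guided_filter(I, I)
```

Because q inherits edges from the guide I, filtering an upsampled multispectral band with the panchromatic image as guide transfers the panchromatic edge structure into the interpolation.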
3D Model Retrieval Algorithm Based on Multi Feature Fusion
ZHOU Yan, ZENG Fan-zhi and YANG Yue-wu
Computer Science. 2016, 43 (7): 303-309.  doi:10.11896/j.issn.1002-137X.2016.07.056
Abstract PDF(1890KB) ( 501 )   
References | Related Articles | Metrics
To address the limited effectiveness of single-feature retrieval of 3D models,in this paper we proposed three feature vector extraction algorithms for 3D models:the Extended Gauss Sphere (EGS) feature vector describing the surface characteristics of the model,the Radon Transform Spherical Distribution (RTSD) feature vector reflecting the internal structure of the model,and the View Hierarchical Compressed Sensing (VHCS) feature vector representing the projection layers of the model.Secondly,we presented a weighted-coefficient estimation method for multi-feature fusion based on the classification information entropy of sample-model query results and a supervised learning process.Finally,a multi-feature fusion similarity measure between models was designed to complete 3D model retrieval based on a query sample.Simulation results show that the three proposed feature vectors have good discriminative ability,and that the recall and precision of the multi-feature fusion retrieval algorithm are improved obviously.
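The entropy-based weighting idea can be sketched as: a feature whose sample-query results have low class entropy (i.e., retrieve a consistent class) earns a larger fusion weight. The top-10 class counts below for the three feature channels are illustrative, not measured results.

```python
import numpy as np

def entropy(class_counts):
    """Shannon entropy (bits) of the class distribution of a query result."""
    p = np.asarray(class_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def fusion_weights(result_class_counts):
    """Weight each feature inversely to its query-result entropy, normalized
    so the weights sum to 1. max_h is the entropy of a uniform split."""
    max_h = np.log2(max(len(c) for c in result_class_counts))
    raw = np.array([max_h - entropy(c) for c in result_class_counts])
    return raw / raw.sum()

# Illustrative top-10 class counts for the EGS, RTSD and VHCS channels:
# EGS retrieved 9 of the correct class, RTSD split 5/5, VHCS 7/3.
counts = [[9, 1], [5, 5], [7, 3]]
w = fusion_weights(counts)

# Fused distance between two models = weighted sum of per-feature distances.
fused = float(np.dot(w, [0.2, 0.9, 0.4]))
```

The most consistent channel (EGS here) dominates the fused similarity measure, while the uninformative 5/5 channel is weighted to zero.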
Double Search Colony Algorithm for Color Remote Sensing Image Edge Detection Based on Quaternion Representation
PU Guo-lin and QIU Yu-hui
Computer Science. 2016, 43 (7): 310-313.  doi:10.11896/j.issn.1002-137X.2016.07.057
Abstract PDF(628KB) ( 409 )   
References | Related Articles | Metrics
With the advent of big data in remote sensing imagery,the shortcomings of common color remote sensing image edge detection,namely heavy computation,low speed,and poor results,have become more obvious.We used a quaternion vector to represent each color pixel,improved the single search equation of the artificial bee colony algorithm,expanded the employed bees’ search area,added a Lévy flight factor to the onlooker bees’ equation,and thus proposed an artificial bee colony algorithm based on a dual search equation.Experimental results show that the improved double-search artificial bee colony algorithm for remote sensing image edge detection not only significantly reduces the amount of computation of edge detection,but also reduces the noise in the color remote sensing image.The proposed algorithm can be effectively applied to obtain recognition targets from remote sensing images.
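The Lévy flight factor added to the onlooker-bee update can be generated with Mantegna's algorithm, the usual way of drawing heavy-tailed step lengths. The exponent beta, the scale, and the update form below are common illustrative choices, not necessarily the paper's.

```python
import math
import numpy as np

def levy_step(size, beta=1.5, gen=None):
    """Mantegna's algorithm: heavy-tailed step lengths for a Levy flight."""
    if gen is None:
        gen = np.random.default_rng(0)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = gen.normal(0.0, sigma_u, size)
    v = gen.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# Illustrative onlooker update: move relative to a neighbour solution x_k,
# perturbed by a scaled Levy step so occasional long jumps escape local optima.
def onlooker_update(x_i, x_k, phi=0.5, scale=0.01):
    return x_i + phi * (x_i - x_k) + scale * levy_step(x_i.shape)

x_new = onlooker_update(np.array([1.0, 2.0]), np.array([1.5, 1.8]))
```

Most Lévy steps are small (local refinement), but the heavy tail occasionally produces a long jump, which is the property exploited to widen the onlookers' search.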
Driver Fatigue Detection Based on IMF Time-frequency Features of Pulse Signal and SVDD
JIANG Jian-chun, JIANG Li, TANG Hui, ZHANG Zhuo-peng and WU Xue-gang
Computer Science. 2016, 43 (7): 314-318.  doi:10.11896/j.issn.1002-137X.2016.07.058
Abstract PDF(444KB) ( 583 )   
References | Related Articles | Metrics
To address the difficulty of characterizing non-stationary signals (e.g.,pulse signals) with traditional time-frequency features and the scarcity of driver-fatigue pulse samples,an approach was proposed to detect driver fatigue based on the time-frequency features of the intrinsic mode functions (IMFs) of the pulse signal and support vector data description (SVDD).This approach makes full use of the advantages of IMFs being suitable for characterizing non-stationary signals and SVDD being good at classification with unbalanced samples.First,the pulse signals are decomposed by the empirical mode decomposition method to obtain multiple IMF components.Then,the time-frequency features of each IMF are extracted,consisting of the normalized energy,the maximum instantaneous frequency and the average instantaneous amplitude.Finally,the SVDD classifier is used to detect the fatigue status of drivers and give the corresponding fatigue level.Comparison experiments suggest that this approach can effectively detect the fatigue status of drivers.
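The three per-IMF features named above can all be computed from the analytic signal via the Hilbert transform. The sketch below omits the EMD step and feeds in two synthetic tones standing in for IMF components; sampling rate and signals are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def imf_features(imfs, fs):
    """Per-IMF features: normalized energy, maximum instantaneous frequency,
    and mean instantaneous amplitude, from the Hilbert analytic signal."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    norm_energy = energies / energies.sum()
    feats = []
    for imf, e in zip(imfs, norm_energy):
        analytic = hilbert(imf)
        amp = np.abs(analytic)                      # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
        feats.append([e, float(inst_freq.max()), float(amp.mean())])
    return np.array(feats)

# Two synthetic "IMFs": a 1 Hz and a weaker 5 Hz tone standing in for EMD output.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
imfs = [np.sin(2 * np.pi * 1 * t), 0.5 * np.sin(2 * np.pi * 5 * t)]
F = imf_features(imfs, fs)
```

Each pulse sample thus becomes a short feature matrix; flattening it gives the vector that the SVDD classifier encloses with its minimal hypersphere.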
Contour Detection Model Based on Color Opponent Receptive Field
WU Jing-li and LIU Yuan-jing
Computer Science. 2016, 43 (7): 319-323.  doi:10.11896/j.issn.1002-137X.2016.07.059
Abstract PDF(943KB) ( 506 )   
References | Related Articles | Metrics
Contour detection is widely used in fields such as target recognition,image segmentation and pattern recognition.According to principles of visual biology,researchers have put forward contour detection methods for gray images and achieved good results.However,color carries much of the information in an image,and its role cannot be ignored when detecting contours.The CO model proposed by Kaifu Yang can extract target contours from an image,but the execution efficiency of the model still needs to be improved.In this paper,a contour detection model named CRFM (Color-opponent Receptive Field Model) was proposed.Following the visual information processing mechanism,the model simulates the responses of the receptive fields of retinal ganglion cells and of lateral geniculate nucleus cells respectively.In addition,CRFM uses the difference between the partial derivatives of two Gaussian functions with different scales to simulate the response of the double-opponent receptive field in the primary visual cortex and its visual characteristics.Because the filter simulating the double-opponent receptive field usually takes small values,the convolution between the filter and the image can be computed faster,and the execution cost is decreased.The BSDS300 database was used in the experiments.Results show that the CRFM model obtains better performance and has higher execution efficiency than the CO model,and is practical in realistic applications.
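The double-opponent response modeled as a difference between partial derivatives of two Gaussians at different scales can be sketched as below; the two scales and the toy opponent channel with a vertical edge are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_derivative_response(channel, sigma1=1.0, sigma2=2.0, axis=1):
    """Difference between the first partial derivatives of two Gaussians at
    different scales, applied to a color-opponent channel; responds to
    oriented chromatic edges."""
    order = [0, 0]
    order[axis] = 1                       # differentiate along the chosen axis
    g1 = gaussian_filter(channel, sigma1, order=tuple(order))
    g2 = gaussian_filter(channel, sigma2, order=tuple(order))
    return g1 - g2

# Toy red-green opponent channel with a vertical edge at column 16.
opp = np.zeros((32, 32))
opp[:, 16:] = 1.0
resp = dog_derivative_response(opp, axis=1)
```

The response is concentrated in a narrow band around the chromatic edge and is near zero elsewhere, which is why such a filter yields sparse convolutions and low execution cost.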
Research on Pedestrian Detection Method Based on Laser Scanning
ZHANG Zhi-gang, SUN Li-cai and WANG Pei
Computer Science. 2016, 43 (7): 328-330.  doi:10.11896/j.issn.1002-137X.2016.07.061
Abstract PDF(1134KB) ( 623 )   
References | Related Articles | Metrics
The pedestrian detection in this work is based on a laser sensor.By laser scanning,point cloud data are acquired from the current environment and analyzed frame by frame.The detection starts from the point cloud data of each frame in the initial phase,and the exact point clouds of human bodies of different sizes are then estimated.We applied combinations of angular resolution,scanning frequency,processing route and processing speed to check the accuracy.We also analyzed the parameters of the laser sensor and the detection performance in different directions to obtain an optimal plan for the sensor settings.This paper aims at measuring the feasibility of using a laser gauging sensor in pedestrian detection,and at suggesting a preliminary procedure for the detection.
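A common first step for such a procedure is to split the ordered scan points into clusters wherever the Euclidean gap between consecutive points jumps, then keep clusters whose width is plausible for a human leg. The gap and width thresholds below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def cluster_scan(points, gap=0.15):
    """Split ordered 2-D scan points into clusters wherever the distance
    between consecutive points exceeds `gap` (metres)."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        if np.linalg.norm(cur - prev) > gap:
            clusters.append(np.array(current))
            current = []
        current.append(cur)
    clusters.append(np.array(current))
    return clusters

def leg_like(cluster, min_w=0.05, max_w=0.25):
    """First-to-last point width within a plausible human-leg range."""
    width = np.linalg.norm(cluster[-1] - cluster[0])
    return min_w <= width <= max_w

# Toy scan frame: a long wall segment plus a small leg-sized blob.
wall = np.stack([np.linspace(0, 2, 40), np.full(40, 3.0)], axis=1)
leg = np.stack([np.linspace(1.0, 1.12, 6), np.full(6, 1.5)], axis=1)
scan = np.concatenate([wall, leg])
clusters = cluster_scan(scan)
legs = [c for c in clusters if leg_like(c)]
```

The wall cluster is rejected by the width test while the 12 cm blob survives; tracking such candidates across frames is what turns this per-frame filter into pedestrian detection.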