Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Volume 43 Issue 8, August 2016
Review and Prospect of Server Monitoring Technology
WANG Hui-qiang, DAI Xiu-hao, LV Hong-wu and LIN Jun-yu
Computer Science. 2016, 43 (8): 1-6.  doi:10.11896/j.issn.1002-137X.2016.08.001
With the extensive application of servers in various fields and the rapid growth of server scale, server monitoring technology plays a key role in ensuring the long-term, effective operation of servers. This paper began by analyzing the requirements of server monitoring, and then summarized the related monitoring technologies, including the heartbeat mechanism, IPMI, SNMP and virtualization technology. It also described mainstream domestic and international server monitoring products, such as IBM Tivoli and HP OpenView, and contrasted and analyzed their functions. Finally, this paper predicted the development of server monitoring technology and presented a monitoring framework for high-availability servers.
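Of the monitoring technologies the survey covers, the heartbeat mechanism is the simplest to illustrate: a server is presumed failed when no heartbeat has arrived within a timeout. The sketch below is illustrative only (the class and method names are invented, not taken from any surveyed product):

```python
import time

class HeartbeatMonitor:
    """Mark a server dead if no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}          # server name -> time of last heartbeat

    def beat(self, server, now=None):
        self.last_seen[server] = time.monotonic() if now is None else now

    def alive(self, server, now=None):
        now = time.monotonic() if now is None else now
        return server in self.last_seen and now - self.last_seen[server] <= self.timeout

m = HeartbeatMonitor(timeout=5)
m.beat("web-1", now=100.0)
print(m.alive("web-1", now=103.0), m.alive("web-1", now=106.0))  # True False
```

Production monitors of the kind the survey compares combine such liveness checks with out-of-band hardware telemetry (IPMI) and in-band metric collection (SNMP).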
Advances in Fixed-parameter Tractable Approximation Algorithms for NP-hard Problems
LIU Yun-long and CUI Meng-tian
Computer Science. 2016, 43 (8): 7-12.  doi:10.11896/j.issn.1002-137X.2016.08.002
A fixed-parameter tractable approximation algorithm produces an approximate solution using the approach of parameterized computation, which has become one of the practical approaches to dealing with NP-hard problems. In this paper, we investigated fixed-parameter tractable approximation algorithms for various NP-hard problems according to their parameterized computational complexity. More specifically, we surveyed fixed-parameter tractable approximation algorithms for fixed-parameter tractable problems, for problems with unknown parameterized computational complexity, and for W[t]-hard problems (t≥1), respectively. For each kind of problem, we summarized typical examples with the results obtained in recent years, analyzed the main techniques employed in the algorithms, and discussed some issues to be studied further.
Survey of Ghost Detection and Removal Methods in HDR Imaging Technology
HU Sheng-nan, ZHANG Wei, LIU Kan and GU Jian-jun
Computer Science. 2016, 43 (8): 13-18.  doi:10.11896/j.issn.1002-137X.2016.08.003
Because of its extensive practical and theoretical value, high dynamic range imaging (HDRI) has become a hot topic in image processing, and how to deal with the ghosting produced in the process has attracted many researchers' attention. This paper categorized ghost detection and removal methods along two dimensions: whether the moving objects belong to the shooting target, and the operation domain. Ghost detection was classified into radiation-domain detection and image-domain detection; removal methods were classified into radiation-domain removal and image-domain removal. For each category, the state-of-the-art research achievements were reviewed comprehensively, and the characteristics of the different algorithms were analyzed. Thoughts on and prospects for this field were given at the end of the paper.
Data Dimension Reduction Method Based on PCA for Monitoring Data of Virtual Resources in Cloud Computing
HONG Bin, DENG Bo, PENG Fu-yang, BAO Yang and FENG Xue-wei
Computer Science. 2016, 43 (8): 19-25.  doi:10.11896/j.issn.1002-137X.2016.08.004
Cloud computing has become increasingly popular, and cloud providers face serious problems as resource monitoring tasks become more and more complicated. As an effective approach to enhancing the availability and reliability of cloud infrastructures, a state monitoring system aims to detect anomalous states in the cloud by analyzing monitoring data. To reduce the volume of processed data, we proposed a data dimension reduction method with high fidelity based on PCA (Principal Component Analysis). The results of experiments carried out on the VICCI cloud service platform show that our method can select the kernel metrics from hundreds of monitoring data types and sharply reduce the computing overhead incurred by state monitoring tasks.
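The PCA step described above, projecting many correlated monitoring metrics onto a few principal components while retaining most of the variance, can be sketched as follows. This is a generic PCA sketch, not the authors' implementation; the 95% variance threshold and the synthetic data are assumptions:

```python
import numpy as np

def pca_reduce(X, var_ratio=0.95):
    """Project X (samples x metrics) onto components covering var_ratio of variance."""
    Xc = X - X.mean(axis=0)                       # center each metric
    cov = np.cov(Xc, rowvar=False)                # metric-by-metric covariance
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]                # reorder descending
    vals, vecs = vals[order], vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(cum, var_ratio)) + 1  # number of components needed
    return Xc @ vecs[:, :k], k

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))                  # 3 latent signals
X = base @ rng.normal(size=(3, 10))               # 10 correlated "metrics"
Z, k = pca_reduce(X)
print(Z.shape)  # only a few components survive
```

Because the synthetic metrics have rank 3, at most three components are needed to explain 95% of the variance.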
Multilayer Topology Structural Vulnerability Analysis Algorithm for Large-scale Distributed System
KUANG Xiao-hui, LI Jin and ZHAO Gang
Computer Science. 2016, 43 (8): 26-29.  doi:10.11896/j.issn.1002-137X.2016.08.005
Structural vulnerability is a typical type of vulnerability in large-scale distributed systems. Aiming at the complex relations between entities and the redundancy mechanisms in large-scale distributed systems, a new multi-layer topology model was proposed. Based on the structural vulnerability analysis algorithm of the entity topology model, a new low-layer structural vulnerability analysis algorithm for large-scale distributed systems was put forward. The algorithm was verified by implementation and evaluation.
Networked Operational Information Flowing Dynamic Hypergraph Model Based on Motif
YANG Ying-hui, LI Jian-hua, NAN Ming-li, CUI Qiong and WANG Hong
Computer Science. 2016, 43 (8): 30-35.  doi:10.11896/j.issn.1002-137X.2016.08.006
A dynamic, exact, direct and elaborate information flowing model is an important guarantee for analyzing operational information interaction relationships and exploring information empowerment mechanisms. Aiming at the problem of constructing an information flowing model for networked operations, several correlative concepts, such as operational node and information flow, were first defined. Three kinds of basic information flow motifs, for intelligence, command & control and striking, were abstracted, and different functional combinations were discussed. Secondly, the relevant hypergraph theory was explained, and the operational task hierarchical decomposition method and its hypergraph expression form were studied. Then, the general flow and steps for constructing an operational information flowing dynamic hypergraph model were given. Finally, taking aviation offensive operations as an example, the feasibility and validity of the model were verified.
Algorithm Research on Influence of Node Based on Local k-shell
LUO Ai-min and YANG Wei-sheng
Computer Science. 2016, 43 (8): 36-38.  doi:10.11896/j.issn.1002-137X.2016.08.007
Identifying the most important nodes in complex network transmission dynamics has important application value. Aiming at the problem of influence analysis in complex networks, a method of measuring influence based on the local k-shell, which considers the core values of neighbor nodes, was provided in this paper. Finally, the method was verified to be rational and valid using an Email scale-free network data set as the experimental object. This lays a foundation for applied research on influence in complex networks.
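The underlying idea, ranking a node by the k-shell (core) values of itself and its neighbors, can be illustrated with the standard minimum-degree peeling decomposition. This is a generic sketch; the paper's exact influence formula may differ:

```python
def core_numbers(adj):
    """k-shell decomposition: repeatedly peel the minimum-degree node."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining, core, k = set(adj), {}, 0
    while remaining:
        v = min(remaining, key=deg.get)   # current minimum-degree node
        k = max(k, deg[v])                # shell index never decreases
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

def local_kshell_influence(adj):
    """Influence of a node = its own core value plus its neighbours' core values."""
    ks = core_numbers(adj)
    return {v: ks[v] + sum(ks[u] for u in adj[v]) for v in adj}

# Triangle 1-2-3 (core value 2) with a pendant node 4 (core value 1)
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(local_kshell_influence(adj))  # node 3 ranks highest
```

Unlike plain degree centrality, this measure distinguishes a node attached to the dense core from one attached only to peripheral nodes.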
Unknown Bit-stream Protocol Classification Model with Zero-knowledge
ZHANG Feng-li, ZHOU Hong-chuan, ZHANG Jun-jiao, LIU Yuan and ZHANG Chun-rui
Computer Science. 2016, 43 (8): 39-44.  doi:10.11896/j.issn.1002-137X.2016.08.008
To solve the difficult problem of identifying unknown bit-stream protocols with zero knowledge, a protocol classification model was proposed. Firstly, this model calculates an approximation of the parameter K and the initial cluster centers using the inherent features of the bit stream; then it uses improved K-Means to cluster the data set into different clusters given the parameter K and the initial centers; finally, it evaluates the clustering results with a hybrid evaluation method based on information entropy. Clusters with good evaluation results can be labeled and studied further. Tests on a data set published by the Lincoln Laboratory show that unknown bit-stream protocols can be classified with high accuracy by this model, and that the entropy-based evaluation method is also useful and effective.
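The entropy-based evaluation step can be illustrated generically: for each cluster, compute the entropy of the true protocol labels inside it and average, weighted by cluster size. This is a standard cluster-purity measure, not the paper's specific hybrid method; the toy labels are invented:

```python
import math
from collections import Counter

def cluster_entropy(assignments, truths):
    """Size-weighted label entropy over clusters: lower means purer clusters."""
    clusters = {}
    for c, t in zip(assignments, truths):
        clusters.setdefault(c, []).append(t)
    total, h = len(assignments), 0.0
    for members in clusters.values():
        n = len(members)
        ent = -sum(m / n * math.log2(m / n) for m in Counter(members).values())
        h += n / total * ent
    return h

pure = cluster_entropy([0, 0, 1, 1], ["a", "a", "b", "b"])    # perfectly pure
mixed = cluster_entropy([0, 0, 1, 1], ["a", "b", "a", "b"])   # fully mixed
print(pure, mixed)  # 0.0 1.0
```

A score near zero indicates each cluster contains a single protocol, so such clusters are safe to label for further study.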
Improvement of Train Control Network Token Passing Algorithm Based on ARCNET
ZHANG Xin-you and WEI Jun-chao
Computer Science. 2016, 43 (8): 45-49.  doi:10.11896/j.issn.1002-137X.2016.08.009
The features of train control networks based on ARCNET are analyzed in detail in this paper. Aiming at the specific problems of the ARCNET protocol used in train control networks, the paper proposed a new token transmission mechanism named DATP (Double Address Token Passing). Each node saves two next-node addresses instead of the single node address used in traditional ARCNET. This mechanism avoids the disadvantage that a successor node is difficult to find quickly when the current node leaves the network. Analysis and simulation show that the new token transmission mechanism increases token transmission efficiency and node throughput while keeping the original advantages of ARCNET, which can improve the performance of the train control network.
Cross-layer Optimization Algorithm Based on Raptor Code for Video Multicast
QIAN Xiao-jie and WANG Chao
Computer Science. 2016, 43 (8): 50-54.  doi:10.11896/j.issn.1002-137X.2016.08.010
In fourth-generation mobile communication, much attention is paid to the quality of service (QoS) of mobile broadband network transmission in video multicast based on multiple-input multiple-output (MIMO) systems. A video multicast cross-layer optimization algorithm based on Raptor codes was proposed. By analyzing the packet error rate of Raptor codes against other erasure codes, and selecting the appropriate Raptor rate and modulation and coding scheme (MCS) to optimize the combination, the throughput of the system can be increased and the channel resource (time slot) utilization can also be improved. Simulation shows that the algorithm improves throughput by at least 28% and saves at least 18% of channel resources. In addition, when the SNR is 11.5 dB, the PSNR performance gain of the BUS sequence can reach more than 4 dB.
Distributed Relay Selection Algorithms for Cellular Networks
WU Hang, QIAN Li-ping and CHEN Qing-zhang
Computer Science. 2016, 43 (8): 55-59.  doi:10.11896/j.issn.1002-137X.2016.08.011
This work aims at minimizing the total transmit power subject to an outage probability requirement for dual-hop decode-and-forward relay cellular networks. In particular, this paper first presented the closed-form optimal total power consumed by the user node and the cooperative relay node under the outage probability constraint. Then, a distributed auction algorithm based on acknowledgement (DAA-ACK) and an improved distributed auction algorithm (IDAA) were proposed to assign every user to a suitable relay node. Simulation results show that every user node can find its best cooperative relay through limited message passing with neighboring relay nodes.
Adaptive Migration Algorithm Choosing Framework in Live Migration of Multiple Virtual Machines
CUI Yong, LIN Yu-song, LIU Wei, GAO Shan and WANG Zong-min
Computer Science. 2016, 43 (8): 60-65.  doi:10.11896/j.issn.1002-137X.2016.08.012
In IaaS cloud computing platforms, live migration of multiple virtual machines plays a dominant role in the dynamic scheduling, optimization and management of IT resources. Although pre-copy and post-copy are the prevalent live migration algorithms for a single virtual machine, each with its own pros and cons, only one of them is uniformly adopted in the context of gang live migration. This scheme cannot choose the best migration algorithm for each virtual machine according to its workload, eventually degrading the whole migration performance. This paper proposed an adaptive live-migration algorithm choosing framework, which uses a fuzzy clustering method to classify the virtual machines to be migrated and migrates each with the chosen optimal migration algorithm. Experimental results show that the proposed framework can exploit the respective advantages of the two basic migration algorithms and improve overall live migration performance.
Packet Drop Detector Design Based on IEEE 802.11e in Wireless Sensor and Actuator Networks
YAO Yi, LIU Yong, SHEN Xuan-fan, LIAO Yong and ZHAO Ming
Computer Science. 2016, 43 (8): 66-70.  doi:10.11896/j.issn.1002-137X.2016.08.013
In current wireless sensor and actuator networks (WSANs), wireless sensors and the service types of the information they transfer tend to be diversified; at the same time, in real-time industrial systems, packet drops in the wireless network environment can seriously damage the whole system. To improve the reliability of WSANs, this paper proposed an optimal design method for a WSAN packet drop detector based on IEEE 802.11e. The method adopts IEEE 802.11e, which provides quality of service (QoS), as the data communication protocol, derives the WSAN packet drop probability matrix under the protocol, uses the output of a Luenberger state observer based on the packet drop probability matrix as the decision threshold for packet drops, and treats a packet drop in the network as a fault signal, thereby designing a WSAN packet drop detector. The proposed detector not only can effectively determine whether a packet drop has occurred in the network, but can also distinguish the cause of the drop from the fault signal waveform, that is, a sensor node fault or a random packet drop caused by an unstable channel environment. Finally, the effectiveness of the proposed design is demonstrated by the results of MATLAB/OMNeT++ hybrid simulations.
Improved Non-data-aided SNR Estimation Algorithm
YANG Jun, JIANG Hong and ZHANG Qiu-yun
Computer Science. 2016, 43 (8): 71-73.  doi:10.11896/j.issn.1002-137X.2016.08.014
To address the poor signal-to-noise ratio (SNR) estimation performance of several existing non-data-aided (NDA) estimation algorithms for quadrature phase shift keying (QPSK) signals, this paper proposed an improved low-complexity SNR estimation algorithm to improve the estimation precision and range. Firstly, the proposed algorithm divides the received signal into an in-phase channel and a quadrature channel. Then, it takes full advantage of the oversampling rate and statistically processes the two channel signals in each oversampling period respectively. Finally, the SNR value is estimated from the calculated mean and variance of each channel signal. Simulation results show that the improved algorithm has low estimation bias and mean squared error (MSE) when the SNR is in the range of -15 to 30 dB, and thus clearly outperforms other existing algorithms.
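As a point of comparison for moment-based NDA estimation, the classical M2M4 estimator derives the SNR from the second and fourth moments of the received samples. This is the well-known baseline for constant-modulus signals in complex Gaussian noise, not the paper's improved algorithm; the QPSK simulation parameters are assumptions:

```python
import math, random

def m2m4_snr_db(samples):
    """M2M4 moment estimator for a constant-modulus signal in complex AWGN."""
    m2 = sum(abs(y) ** 2 for y in samples) / len(samples)
    m4 = sum(abs(y) ** 4 for y in samples) / len(samples)
    s = math.sqrt(max(2 * m2 * m2 - m4, 1e-12))   # estimated signal power
    n = max(m2 - s, 1e-12)                        # estimated noise power
    return 10 * math.log10(s / n)

random.seed(1)
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)                  # unit-power QPSK symbols
sigma = math.sqrt(noise_var / 2)                  # per-dimension noise std
rx = [complex(random.choice((-1, 1)) / math.sqrt(2) + random.gauss(0, sigma),
              random.choice((-1, 1)) / math.sqrt(2) + random.gauss(0, sigma))
      for _ in range(20000)]
est = m2m4_snr_db(rx)
print(round(est, 1))  # close to 10 dB
```

Like other moment-based NDA estimators, M2M4 degrades at low SNR, which is the regime the paper's improvement targets.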
Efficient Caching Mechanism Based on Soft Defined Information-centric Networks
LEI Fang-yuan, CAI Jun, LUO Jian-zhen, DAI Qing-yun and ZHAO Hui-min
Computer Science. 2016, 43 (8): 74-78.  doi:10.11896/j.issn.1002-137X.2016.08.015
In-network caching is one of the most important features of information-centric networks (ICN). A caching strategy based on a software-defined information-centric network was proposed to efficiently utilize the cache resources of the whole network. In the control plane, the SDN controller is aware of the dynamic network status; the caching mechanism considers not only the importance of a cache community but also a node's importance within its cache community, which distributes content objects of different popularity more reasonably over time. The SIC mechanism was implemented under a variety of experimental conditions and compared with the previous strategies Hash-LFU and Betw+LRU. The simulation results show that the mechanism yields significant performance improvements in average request delay, cache hit ratio and hop reduction ratio, while the resource overhead of the SDN controller remains low.
Research of Passive OS Recognition Based on Decision Tree
YI Yun-hui, LIU Hai-feng and ZHU Zhen-xian
Computer Science. 2016, 43 (8): 79-83.  doi:10.11896/j.issn.1002-137X.2016.08.016
As the problem of network threats worsens, it makes great sense to study methods of operating system recognition, which is a key part of network security evaluation. Current operating system recognition based on a TCP/IP stack fingerprint database cannot recognize unknown fingerprints. A passive operating system identification method based on decision trees was proposed and compared with other classification algorithms. Experiments show that this classification algorithm is more effective and provides an explanation of its results.
Watermarking Algorithm Based on Zernike Moments and NSCT-SVD
CHEN Ying, ZHENG Hong-yuan and DING Qiu-lin
Computer Science. 2016, 43 (8): 84-88.  doi:10.11896/j.issn.1002-137X.2016.08.017
In this paper, we proposed a blind, geometrically robust NSCT-SVD watermarking algorithm that applies geometric correction to the image based on the non-subsampled contourlet transform (NSCT), singular value decomposition (SVD) and Zernike moments. It combines the invariance of Zernike moments to attacks and rotation, the fixed relative position of the invariant centroid before and after attacks, and their insensitivity to noise. In the algorithm, the low-frequency region of the host image is extracted by NSCT decomposition, the watermark is embedded by quantizing the Euclidean norms of the singular values of the Zernike moments and the invariant centroid, and the watermark is then extracted after the synchronization information is recovered. The experimental results show that the algorithm is robust to various kinds of processing, such as noise, filtering, compression and all kinds of geometric attacks.
Impossible Differential Cryptanalysis of ESF
CHEN Yu-lei and WEI Hong-ru
Computer Science. 2016, 43 (8): 89-91.  doi:10.11896/j.issn.1002-137X.2016.08.018
This paper studied and analyzed the ability of the block cipher ESF to resist impossible differential cryptanalysis; an 8-round impossible differential route was used and the related results were given. On the basis of the 8-round impossible differential route, exploiting the relationship between the round keys and changing the original order of round-number extension and key guessing, the paper attacked 11-round ESF, improving the previous impossible differential result on 11-round ESF. The computation shows that the attack on 11-round ESF needs O(2^53) chosen plaintexts and O(2^32) encryptions. It also shows that 11-round ESF is not immune to impossible differential cryptanalysis.
Improved Certificateless Proxy Blind Signature Scheme
LIU Er-gen, WANG Xia, ZHOU Hua-jing and GUO Hong-li
Computer Science. 2016, 43 (8): 92-94.  doi:10.11896/j.issn.1002-137X.2016.08.019
Through a security analysis of the certificateless proxy blind signature scheme proposed by Zhang Xiao-min, it was found that the scheme cannot resist the malicious-but-passive KGC attack or signature forgery attacks; in addition, the KGC colluding with the original signer can obtain the proxy key. To avoid these attacks, an improved certificateless proxy blind signature scheme with better security was presented.
Similarity Measure Algorithm of Cipher-text Based on Re-extracted Keywords
LI Zhi-hua, CHEN Chao-qun, LI Cun, HU Zhen-yu and ZHANG Hua-wei
Computer Science. 2016, 43 (8): 95-99.  doi:10.11896/j.issn.1002-137X.2016.08.020
To measure similarity and dissimilarity between cipher texts, a new similarity measure algorithm for cipher text based on re-extracted keywords, called SMCTBRK, was proposed. By defining the new concepts of the effective scope, relative scope and distributed scope of keywords, and re-extracting the keywords in documents, SMCTBRK constructs an encrypted index item for the compared documents from a small number of re-extracted keywords. The encrypted index item is organized as a feature vector. SMCTBRK then computes the similarity between different cipher texts using the encrypted index items instead of the separate keywords. Experiments on real documents were conducted, and the results show that SMCTBRK outperforms the Shingling and Simhash algorithms in accuracy and recall.
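Once each document is reduced to a keyword-weight feature vector, similarity between two vectors is commonly computed as a cosine, as in this generic sketch. The encryption of the index items is omitted here, and the keywords and weights are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse keyword-weight vectors (dicts)."""
    dot = sum(w * v.get(k, 0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = {"cloud": 3, "cipher": 2, "index": 1}        # invented keyword weights
d2 = {"cloud": 2, "cipher": 2, "query": 1}
d3 = {"graph": 4, "motif": 1}
print(round(cosine(d1, d2), 3), cosine(d1, d3))   # overlapping pair vs. disjoint pair
```

Working on a few re-extracted keywords instead of all separate keywords keeps the vectors short, which is the efficiency point the abstract makes.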
Image Encryption Scheme Based on Generalized Chaotic Synchronization Systems
CHAI Hong-yu and ZANG Hong-yan
Computer Science. 2016, 43 (8): 100-104.  doi:10.11896/j.issn.1002-137X.2016.08.021
Many works apply low-dimensional chaotic maps to encryption; however, high-dimensional maps are more complex and have more parameters. Firstly, a 4-dimensional generalized chaotic synchronization (GCS) system was constructed based on the generalized chaotic synchronization theorem. Combining this system, we designed a digital image encryption scheme, which can encrypt and decrypt color images without any distortion. Secondly, performance tests and security analysis were performed by means of key space analysis, pixel distribution characteristics, correlation coefficients, key sensitivity tests, information entropy and the avalanche effect. The results of analysis and simulation show that the key space of the image encryption scheme is larger than 10^288 and that the scheme has strong anti-attack performance. Additionally, the scheme is extremely sensitive to the system's parameters and initial conditions. The results suggest that the image encryption scheme is effective for network communication.
Dynamic Trust Model Based on DS Evidence Theory under Cloud Computing Environment
SHU Jian and LIANG Chang-yong
Computer Science. 2016, 43 (8): 105-109.  doi:10.11896/j.issn.1002-137X.2016.08.022
Cloud computing technology faces new security problems of privacy disclosure and information loss. How to identify and manage the trust degree of nodes in a cloud system has become an important research area of cloud security, and trust models provide a new way of addressing it. Considering that the architecture of cloud computing is a distributed system, and combining DS evidence theory with trust mechanisms, a dynamic trust model based on DS theory was built for the cloud environment. In the model, subjective and objective trust values are merged into an integrated trust value to describe the safety and credibility of the cloud nodes. The problems of trust value initialization and updating, punishment of malicious nodes and load balancing were discussed. Simulation experiments prove the effectiveness, balance and robustness of the model. Finally, a summary and prospects for future research were put forward.
Study on Web-based Malware Detection Mechanism Based on Behavior and Semantic Analysis
LI Dao-feng, HUANG Fan-ling, LIU Shui-xiang and HUANG An-ni
Computer Science. 2016, 43 (8): 110-113.  doi:10.11896/j.issn.1002-137X.2016.08.023
How to improve the detection efficiency for Web malicious code has always been an open problem in Web security research. A detection mechanism for Web-based malware based on behavior and semantic analysis was proposed in this paper to detect XSS vulnerabilities, malicious ActiveX controls and Web shellcode. Behavioral characteristics were extracted and a detection engine was built to correctly detect XSS vulnerabilities, malicious ActiveX controls and Web shellcode, and to gather forensics of shellcode attacks. Experimental results show that the new detection mechanism for Web-based malware has stronger detection ability and a lower miss rate.
Adaptive Post-type Steganography Algorithm Based on MP3
ZHANG Yao, PAN Feng, SHEN Jun-wei and LI Ning-bo
Computer Science. 2016, 43 (8): 114-117.  doi:10.11896/j.issn.1002-137X.2016.08.024
Aiming at the problems that compressed-domain audio steganography algorithms are very complex and have small embedding capacity, a post-processing algorithm based on MP3 was proposed. This algorithm selects embedding positions adaptively, uniformly and randomly according to the number of secret bits and the number of small-value-area groups that use the Hb code tables. The algorithm embeds the secret information by changing the last bit of the carrier codes. To extract the secret information there is no need to decode; one only needs to find the embedding positions and extract the last bit of the carrier codes. In terms of capacity, the algorithm exceeds the classical MP3Stego algorithm by more than three times. In terms of imperceptibility, it improves on the code-mapping algorithm by 10%. In addition, the algorithm greatly improves efficiency because no decoding is needed. The experimental results show that the algorithm is secure and timely, and can satisfy the needs of covert communication.
Measurement of AS-level Internet Evolution Based on Temporal Distance
ZHANG Yan-qing, LU Yu-liang and YANG Guo-zheng
Computer Science. 2016, 43 (8): 118-122.  doi:10.11896/j.issn.1002-137X.2016.08.025
As inter-domain routing security problems become increasingly prominent, measuring the variability of the AS-level Internet has become a research hotspot. Existing measurement methods cannot reflect the variability of the AS-level Internet comprehensively; therefore, this paper introduced the AS reachability distance (ASRD) and the AS connectivity distance (ASCD), based on temporal distance, to characterize the differences in AS reachability and connectivity at different times. The dynamics of a specific AS can be measured by analyzing routing information bases over different time spans and time granularities. Experimental results show that time series analysis of ASRD and ASCD can not only detect AS-level network anomalies accurately, but also reveal the long-term evolution laws of the AS-level Internet.
New Algorithm for Automatic Deriving Sufficient Conditions of SHA-1
HU Yun-shan, SHEN Yi, ZENG Guang and HAN Wen-bao
Computer Science. 2016, 43 (8): 123-127.  doi:10.11896/j.issn.1002-137X.2016.08.026
Deriving sufficient conditions is one of the important techniques in differential-mode attacks. In this paper, by turning the problem of deriving sufficient conditions into the construction of linear equation systems over F2, and using the judgment theorem for linear equation systems to check the correctness of the sufficient conditions derived at each step, a new algorithm for automatically deriving sufficient conditions for the SHA-1 hash function was proposed. After appropriate adaptation, this algorithm is equally applicable to deriving sufficient conditions for SHA-0, which has a structure similar to that of SHA-1.
Improved RFID Authentication Protocol with Backward Privacy
LIU Dao-wei, LING Jie and YANG Xin
Computer Science. 2016, 43 (8): 128-130.  doi:10.11896/j.issn.1002-137X.2016.08.027
In Internet of Things applications, aiming at the security flaws and low authentication efficiency of existing RFID security authentication protocols, this paper proposed a mutual authentication protocol that satisfies backward privacy for RFID. It uses the Rabin one-way encryption algorithm to solve the problems of synchronization and backward privacy, and uses random numbers to keep the tag information fresh. BAN logic is used to formally analyze the protocol. Comparing the security and cost of the proposed protocol with existing security authentication protocols, the results show that this protocol not only resists tracking, brute-force cracking and replay attacks, but also reduces the cost of the protocol thanks to a smaller gate-circuit count, making it suitable for low-cost RFID systems.
Semantic-based Access Control Approach for Software User
ZHENG Gao-shan, YING Shi and WU Rui
Computer Science. 2016, 43 (8): 131-136.  doi:10.11896/j.issn.1002-137X.2016.08.028
The access control models widely used in application software cannot dynamically change resource access permissions according to the user context. This paper proposed a semantic-based access control approach that realizes dynamic authorization of users. Firstly, we proposed a user model and a resource model based on semantic information, built the foundation ontology for the user model and the resource model, then defined a set of semantic rules and inference rules, and designed a decision algorithm based on a semantic reasoning process. The approach works by receiving and analyzing access requests, obtaining the related user information from the ontology knowledge according to the semantic rules, and invoking the decision algorithm to generate the final recommended results. Finally, a case study of a comprehensive disaster reduction system validated the effectiveness of the approach.
Research of Runtime Verification Based on Live Sequence Chart
YE Jun-min, ZHANG Kun, YE Zhu-jun, CHEN Pan and CHEN Shu
Computer Science. 2016, 43 (8): 137-141.  doi:10.11896/j.issn.1002-137X.2016.08.029
Runtime verification is a lightweight formal verification method, and using a visual requirements description language to model requirements specification scenarios is a hot topic within it. Existing verification methods based on live sequence charts easily generate redundant properties, their verification overhead is quite large, two-valued verification results are not accurate, and the existing verification algorithm based on rewriting logic in Maude is inefficient. To address these problems, this paper proposed an improved runtime verification method based on live sequence charts to support existing runtime verification technologies. Experiments show that the results of the improved method are accurate and its verification overhead is low.
Algorithm for Top-K Keyword Query in Data Streams
ZHENG Shi-min, QIN Xiao-lin, LIU Liang and ZHOU Qian
Computer Science. 2016, 43 (8): 142-147.  doi:10.11896/j.issn.1002-137X.2016.08.030
Distributed Top-K keyword query based on the Spark Streaming framework is a hot research issue; it is used to count the keywords in data streams. Most studies of Top-K keyword query limit the storage space and assume that the keyword set is known. To remove this assumption, we presented a distributed Top-K keyword query algorithm that can be used when the keyword set is unknown. The algorithm dynamically adjusts the size of the storage space according to the monitored keywords and optimizes the update strategy to improve precision. Experimental results show that the proposed algorithm performs better under the condition of an unknown keyword set.
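The bounded-space counting problem the abstract describes is classically handled by the Space-Saving algorithm, which keeps a fixed number of counters and, when full, evicts the minimum counter and lets the newcomer inherit its count. This is a generic single-node sketch of the technique, not the paper's distributed, dynamically-sized algorithm:

```python
def space_saving(stream, capacity):
    """Space-Saving heavy hitters: bounded counters, evict the minimum."""
    counts = {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < capacity:
            counts[item] = 1
        else:
            victim = min(counts, key=counts.get)   # replace the smallest counter
            counts[item] = counts.pop(victim) + 1  # inherit its count (over-estimate)
    return counts

stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 10 + ["d", "e", "f"] * 3
top = sorted(space_saving(stream, 4).items(), key=lambda kv: -kv[1])[:2]
print([k for k, _ in top])  # the frequent keywords survive eviction
```

Inherited counts over-estimate rare items, which is why adaptive storage sizing and refined update strategies, as in the proposed algorithm, matter for precision.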
Segmentation Coding Scheme Based on Simple Regenerating Codes
WANG Jing, LUO Wei, OUYANG Ming-sheng, JIANG Can and WANG Xin-mei
Computer Science. 2016, 43 (8): 148-153.  doi:10.11896/j.issn.1002-137X.2016.08.031
Combining RS erasure codes with simple XOR operations, simple regenerating codes can repair a single failed node while tolerating any n-k node failures. Based on the repair process of simple regenerating codes, we proposed a new segmentation coding scheme. Concretely, the f coded blocks with the same index are divided into two sections, and an XOR operation over the coded blocks of each section generates a new parity block. Performance analysis and simulation results show that, at a small cost in storage overhead, the proposed segmentation coding scheme based on simple regenerating codes optimizes the repair bandwidth and disk I/O overhead, demonstrating the correctness and effectiveness of the scheme.
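The section-parity step can be illustrated with a small, hypothetical byte-level sketch: the same-index coded blocks are split into two sections and each section is XOR-ed into one parity block. Block sizes and contents are toy values, and the paper's exact segmentation may differ.

```python
# Hypothetical byte-level sketch of the segmentation step: split the
# f same-index coded blocks into two sections and XOR each section
# into one new parity block.
def xor_blocks(blocks):
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            parity[i] ^= byte
    return bytes(parity)

def section_parities(coded_blocks):
    mid = len(coded_blocks) // 2
    return xor_blocks(coded_blocks[:mid]), xor_blocks(coded_blocks[mid:])

blocks = [bytes([1, 2]), bytes([3, 4]), bytes([5, 6]), bytes([7, 8])]
p1, p2 = section_parities(blocks)
```

A single lost block in a section is then recovered by XOR-ing that section's parity with its surviving blocks, which is where the repair-bandwidth saving comes from: only one section participates in the repair.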
Approach to Modeling Software Evolution Process for Synchronous Interaction
QIAN Ye, LI Tong, YU Yong, SUN Ji-hong, YU Qian and PENG Lin
Computer Science. 2016, 43 (8): 154-158.  doi:10.11896/j.issn.1002-137X.2016.08.032
Against the background of global software development, interactive collaborative development among software development teams is becoming ever more frequent and complex. To improve software quality by controlling and regulating the behavior of software evolution, EPMM was designed in paper [10]. However, the software evolution process model defined by EPMM cannot formally describe the characteristics of synchronous interaction. Based on the four levels (global, process, activity and task) of the software evolution process defined by EPMM, this paper designed CEPMM. Since it is at the activity level that a model defined by CEPMM can describe synchronous interaction, an approach to modeling the software evolution process at the activity level was put forward based on CCS. Finally, CAmodel, a visualization tool for activity-level modeling of the software evolution process, was built on the Visual Studio platform. A model defined by CEPMM can describe not only concurrency and iteration but also the synchronous interaction of the software evolution process, which lays the foundation for mathematical analysis and reasoning.
Hidden Markov Software Reliability Model with EM Method
ZHANG Ting-ting, ZHANG De-ping and LIU Guo-qiang
Computer Science. 2016, 43 (8): 159-164.  doi:10.11896/j.issn.1002-137X.2016.08.033
Since a single software reliability model neither precisely describes the failure behavior of software nor accurately predicts its reliability, this paper studied a hidden Markov chain software reliability model incorporating change-point analysis. The prediction approach involves a hidden state variable indicating the regime change, which is detected from the software failure data in each regime. The model parameters are estimated with the expectation-maximization (EM) algorithm. Numerical examples were performed on real software failure data sets. Experimental results show that the proposed framework for incorporating multiple change points has fairly accurate and efficient change-point detection capability, and significantly improves the fitting accuracy of software reliability models.
Personalized Recommendation Algorithm Based on Similar Cloud and Measurement by Multifactor
SUN Guang-ming, WANG Shuo and LI Wei-sheng
Computer Science. 2016, 43 (8): 165-170.  doi:10.11896/j.issn.1002-137X.2016.08.034
Aiming at the problems of sparse data, the chance effects caused by strict attribute matching, and similarity computation based on a single rating, this paper presented a personalized recommendation algorithm based on similar clouds and multifactor measurement. The algorithm defines a rating cloud classified by item to fill the sparse matrix, and puts forward a feature vector of item interest degree consisting of item category, mean score, and frequency of rating and accessing, which supports calculating item similarity with the cloud model. On this basis, the algorithm predicts item scores in different categories according to the nearest neighbors, and obtains the final item score with a new weighted-average computing method. Experimental results show that the algorithm produces more accurate neighbors and better recommendation quality.
Promoted Traffic Control Strategy Based on Q Learning and Dynamic Weight
ZHANG Chen, YU Jian and HE Liang-hua
Computer Science. 2016, 43 (8): 171-176.  doi:10.11896/j.issn.1002-137X.2016.08.035
Q-learning is widely used in traffic signal control. In a traditional multi-agent traffic signal control policy, agents gain intersection information via the network and make the best control decision, which works well in most cases. But the traditional policy has a weakness: the global reward is calculated as a simple average, which may cause local blocking in some cases. This paper introduced an improved area traffic signal control method based on Q-learning, in which an "intersection weight" that varies dynamically with real traffic conditions is used in the new reward calculation. Both the traditional and the improved methods were tested experimentally, and the results show the advantage of the improved one.
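A minimal sketch of the weighted-reward idea (not the paper's exact formulas): per-intersection rewards are combined with weights proportional to live traffic volume, and the result feeds a standard tabular Q-learning update. The volume-proportional weighting, learning rate and discount factor are illustrative assumptions.

```python
# "Intersection weight" sketch: a weighted (not simple) average of
# per-intersection rewards, weighted by live traffic volume, so a busy
# blocked intersection is not averaged away.
def global_reward(rewards, volumes):
    total = sum(volumes)
    return sum((v / total) * r for v, r in zip(volumes, rewards))

def q_update(q, state, action, reward, next_max, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update; alpha and gamma are assumed.
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward + gamma * next_max - q.get(key, 0.0))
    return q[key]

# The busy intersection (volume 30) dominates the global reward.
r = global_reward([1.0, 0.0], volumes=[30, 10])
q = {}
q_update(q, "s0", "extend_green", r, next_max=0.0)
```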
Research on Optimal Support Vector Classifier Model Integrating Feature Selection
ZHAO Yu, CHEN Rui and LIU Wei
Computer Science. 2016, 43 (8): 177-182.  doi:10.11896/j.issn.1002-137X.2016.08.036
To incorporate the feature selection process into the support vector machine classifier, a new model called feature selection in semi-definite programming for support vector machine (FS-SDP-SVM) was proposed, integrating the objectives of feature selection and classification. The key to this model is to split the kernel space into several subspaces, one per feature. With a linear combination of these subspaces, a new kernel matrix is constructed and optimized together with the support vector classifier by semi-definite programming. Two parameters for feature choosing are introduced, namely the feature supporter and the feature contributor, which can be flexibly adjusted to maximize accuracy (FS-SDP-SVM1) or minimize the number of features (FS-SDP-SVM2). The empirical study analyzed the differences between the two model variants and the feature selection algorithms Relief-F, SFS and SBS on UCI machine learning data and synthetic data. Results show that FS-SDP-SVM achieves maximum accuracy or a minimum number of features on the majority of the UCI data sets while generalizing well, and it precisely removes noise and preserves the real features in the synthetic data test.
Method for Ensuring Safety of Robot Based on Behavior Detection
GE Bin-bin and MAO Xin-jun
Computer Science. 2016, 43 (8): 183-189.  doi:10.11896/j.issn.1002-137X.2016.08.037
A robot is a typical cyber-physical system, and its behaviors, driven and controlled by software, are highly safety-relevant. To ensure the safety of robot behaviors, this paper proposed a protection method based on behavior detection that enables a robot to check the safety of its behaviors before executing them. Three kinds of safety issues were considered, namely hardware constraints, stability, and collision conflicts, and a corresponding collision-conflict detection algorithm called ACD was proposed. An implementation framework for the method was built on AOP technology. Experiments with NAO robots indicate that the method can effectively detect harmful behaviors and adjust motions to compensate for them.
Multiclass Text Classification by Golden Selection and Support Vector Domain Description
WU De, LIU San-yang and LIANG Jin-jin
Computer Science. 2016, 43 (8): 190-193.  doi:10.11896/j.issn.1002-137X.2016.08.038
Traditional multiclass text classification methods suffer from heavy computation and long training time. An algorithm based on golden selection and support vector domain description (SVDD) was proposed for text classification. The method uses the TF-IDF formula to compute the relative word frequency of each entry, sorts the entries in descending order, and normalizes the text vector. Golden selection is then introduced for dimension reduction, so that the number of redundant sample features is at most one. Finally, SVDD is applied for classification, assigning a test text to the class with the smallest relative class distance. Numerical experiments on various datasets demonstrate that, compared with "one-against-one" and "one-against-all" support vector machines, the proposed method has better robustness, higher classification accuracy and less training time, making it more appropriate for large-scale multiclass text classification problems.
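The preprocessing step can be sketched as follows, assuming a plain tf·idf weighting and L2 normalization (the paper only names "the TF-IDF formula", so the exact variant is an assumption): compute a weight per term, sort in descending order, and normalize the vector.

```python
import math

# Preprocessing sketch: tf-idf weight per term, descending sort, then
# L2 normalization. The tf-idf variant and the norm are assumptions.
def tfidf(doc, corpus):
    n = len(corpus)
    w = {}
    for term in set(doc):
        tf = doc.count(term) / len(doc)
        df = sum(1 for d in corpus if term in d)
        w[term] = tf * math.log(n / df)
    ranked = sorted(w.items(), key=lambda kv: -kv[1])
    norm = math.sqrt(sum(v * v for _, v in ranked)) or 1.0
    return [(t, v / norm) for t, v in ranked]

vec = tfidf(["cat", "dog", "cat"], [["cat", "dog", "cat"], ["dog", "fish"]])
```

With the terms ranked by weight, a cutoff rule such as the golden-selection step can then truncate the tail of the sorted vector.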
EM Parameter Learning Algorithm of Bayesian Network Based on Cloud Model
CAO Ru-sheng, NI Shi-hong, ZHANG Peng and XI Xian-yang
Computer Science. 2016, 43 (8): 194-198.  doi:10.11896/j.issn.1002-137X.2016.08.039
In a Bayesian network, the concept represented by a discretized node is both fuzzy and random. To address this problem, a learning algorithm based on the cloud model and EM was proposed. Firstly, measurable samples are transformed into qualitative concepts through heuristic Gaussian cloud transformation and a cloud generator. The certainty degree of each concept is recorded and converted to a probability with a formula. Then the sample set is expanded and the concepts are confirmed based on these probabilities. Finally, the parameters are optimized with the EM algorithm. Simulation results show that after cloud discretization the learned parameters correspond better to reality, and both the precision of parameter learning and the accuracy of reasoning are improved.
Multi-objective Dynamic Priority Request Scheduling Based on Reward-driven Request Classification
CHEN Mei-mei
Computer Science. 2016, 43 (8): 199-203.  doi:10.11896/j.issn.1002-137X.2016.08.040
The general goal of request scheduling is to maximize throughput and minimize response time while making full use of system resources. But for a busy business Web system aimed at revenue generation, it is crucial to increase the completion rate of transaction-related requests and of requests from VIP users. Aiming at these multiple objectives of request scheduling for a business Web server, a multi-dimensional criterion for reward-driven request classification was first presented. Then, based on the definitions of request priority and scheduling priority, a multi-objective dynamic priority scheduling algorithm (MODP) was proposed, which adaptively provides DiffServ and QoS guarantees for a business Web system under varying workload. A dynamic scheduling mechanism based on one-step-ahead overload estimation, instead of workload measurement, was also introduced to avoid control delay. Simulation experiments show the validity of the scheduling mechanism and algorithm: comparing the completion rate of transaction requests and the average response time with those of traditional FCFS, MODP maintains its preferential principle under both light workload and overload conditions.
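A hedged sketch of reward-driven dispatching: requests are classified (transaction and VIP ahead of plain browsing, as the abstract motivates) and served from a heap ordered by class priority and then arrival time. The class names and ranks are hypothetical, and the paper's MODP algorithm additionally adapts priorities dynamically.

```python
import heapq

# Reward-driven dispatch sketch: class-based static priorities
# (hypothetical classes and ranks), heap ordered by (priority, arrival).
PRIORITY = {"transaction": 0, "vip": 1, "browse": 2}

def schedule(requests):
    # requests: iterable of (request_id, request_class, arrival_time)
    heap = [(PRIORITY[cls], t, rid) for rid, cls, t in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

order = schedule([("r1", "browse", 0), ("r2", "transaction", 1), ("r3", "vip", 2)])
```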
New Continuously Differentiable Filled Function with One Parameter
CAI Zhen-zhen and YE Zhong-quan
Computer Science. 2016, 43 (8): 204-206.  doi:10.11896/j.issn.1002-137X.2016.08.041
The filled function method is an effective approach to nonlinear global optimization problems. For unconstrained optimization problems whose objective functions satisfy certain conditions, a new continuously differentiable filled function with one parameter was proposed, and its theoretical properties were investigated. Finally, several numerical experiments were performed; the results show that the filled function is effective and the algorithm is feasible.
Path Prediction and Query Algorithm Based on Probability
GAO Fa-qin
Computer Science. 2016, 43 (8): 207-211.  doi:10.11896/j.issn.1002-137X.2016.08.042
This paper studied path prediction and query technology in road networks and proposed an optimal path prediction algorithm based on statistics and probability theory. Because practical road networks are complex, we proposed the concept of a possible path set and designed an algorithm to extract the sub-network of the road network involved in path prediction, which reduces the network scale and the complexity of prediction. In the spatial road network environment, existing position prediction technologies for moving objects are mainly used for short-term forecasting and cannot predict the traffic situation at the next road intersection. To overcome this defect and reduce the location update rate of terminal devices, this paper designed a simple road-network mobility model that extracts statistical mobility features from a large number of historical paths and captures the steering mode at road intersections. Based on this mobility model, a high-accuracy traffic forecasting model was proposed to predict the moving path of an object.
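The steering model can be sketched by counting historical transitions at each intersection and predicting the most frequent next segment. The segment IDs below are hypothetical toy values, and the paper's model is considerably richer.

```python
from collections import Counter, defaultdict

# Steering-mode sketch: count intersection-to-segment transitions in
# historical paths, then predict the most frequent next segment.
def turn_model(paths):
    counts = defaultdict(Counter)
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return counts

def predict_next(model, here):
    options = model[here]
    nxt, c = options.most_common(1)[0]
    return nxt, c / sum(options.values())

model = turn_model([["A", "B", "C"], ["A", "B", "D"], ["X", "B", "C"]])
nxt, p = predict_next(model, "B")  # most historical drivers turned toward "C"
```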
Some Fusion Algorithms Used in Localization of Gas Leakage Source
DENG Zhen-wen, SUN Qi-yuan, JIA Yun-wei and YUAN Zhi-qian
Computer Science. 2016, 43 (8): 212-215.  doi:10.11896/j.issn.1002-137X.2016.08.043
Multi-source information fusion algorithms were applied to the search for a gas leakage source. To improve efficiency, gas sensors and a vision sensor were combined to gather information, a single gas sensor was replaced by multiple gas sensors, and measurement was changed from single-point to multi-point. In each data fusion period, suitable algorithms determine the searching direction of the robot. The results indicate that the weighted average method, used to fuse data from gas sensors of the same kind, can reduce the influence of noise and other disturbances; the least squares method, which optimally estimates unknown parameters, can provide an initial estimate of the location and flow of the leakage; and Bayesian inference can accumulate evidence from different information sources to judge the leakage, leading the robot to search for the target more reasonably.
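The weighted-average fusion step might look like the following inverse-variance sketch; the abstract does not specify the weights, so inverse-variance weighting (a common choice for same-kind sensors) is an assumption, and the readings are toy values.

```python
# Inverse-variance weighted average of same-kind gas sensor readings
# (assumed weighting; the paper's exact weights are not specified).
def fuse(readings, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# The noisiest sensor (variance 4.0) contributes the least.
est = fuse([10.0, 12.0, 11.0], [1.0, 4.0, 2.0])
```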
Incremental Algorithm for Constructing Fuzzy Concept Lattices Based on Attributes Decrement
WANG Li-ming, JIANG Qin and ZHANG Zhuo
Computer Science. 2016, 43 (8): 216-222.  doi:10.11896/j.issn.1002-137X.2016.08.044
Directly constructing fuzzy concept lattices currently takes exponential time, and as the set of truth values L grows, the fuzzy concept lattice becomes larger and larger. This paper proposed an algorithm called FMBUAD. A new fuzzy concept lattice is obtained by removing the membership degrees of deleted attributes, which become redundant in the original concepts, without reconstructing the lattice from scratch or considering the set of truth values. We proved the correctness of FMBUAD on the theoretical basis of fuzzy concept lattices. To update the original lattice after attribute deletion, FMBUAD first removes the membership degrees of the deleted attributes from all concepts; secondly, it traverses the lattice to identify all nodes that must be deleted; finally, it repairs the partial order between the parents and children of the deleted nodes. Theory and experimental results show that FMBUAD saves substantial time.
Group Partition in Topic-related Microblogging Spreading Based on Probability Generation Model
CHEN Jing, LIU Yan and WANG Xu-zhong
Computer Science. 2016, 43 (8): 223-228.  doi:10.11896/j.issn.1002-137X.2016.08.045
Events can spread rapidly in the form of topic microblogs and exert enormous influence. Analyzing the users and discovering groups with different interests and sentiments in a topic discussion has therefore drawn the attention of governments and enterprises. Current community detection methods often treat user-generated content and user relationships separately, carrying no semantic information; even methods that combine the two factors fail to account for the behavior and sentiment information present in microblogs, and thus do not mine the groups in a microblog topic discussion well. We proposed a group partition model called BP-STG that takes text information, social contacts, text sentiment and user behavior into consideration, and presented a Gibbs sampling implementation for inference, mining not only different interest groups but also the sentiment distribution and the activeness and behavior information of participants within a group. Moreover, the model can be extended to many kinds of text associated with groups of people, such as e-mails and forum posts. Experimental results on a real dataset show that BP-STG offers an effective solution to group partition in topic-related microblog spreading and provides more meaningful semantic information than state-of-the-art models.
Research on Weak Relation Social Network and Weak Relation Strengthening Based on Data Mining
PAN Shu-yin and GAO Jian-ling
Computer Science. 2016, 43 (8): 229-232.  doi:10.11896/j.issn.1002-137X.2016.08.046
Sina Weibo is a representative weakly related social network. Aiming at the relevance among user behaviors in such a network, this paper analyzed clustering and correlation in detail and made certain hypotheses. We then randomly selected a number of topic microblogs covering different aspects, obtained different numbers of sample points, and carried out the research and analysis on them. The results confirm the rationality of the hypotheses, and the specific conditions under which weak relations grow stronger are identified.
GRASP Algorithm with Parameter Selection Mechanism for Heterogeneous Fleet School Bus Routing Problem
HOU Yan-e, DANG Lan-xue, KONG Yun-feng and XIE Yi
Computer Science. 2016, 43 (8): 233-239.  doi:10.11896/j.issn.1002-137X.2016.08.047
This paper dealt with the heterogeneous-fleet school bus routing problem, in which students are served by a heterogeneous fleet of buses with various capacities and fixed and variable costs. A GRASP (greedy randomized adaptive search procedure) algorithm with a probabilistic parameter selection mechanism was proposed. In the construction phase, a set of threshold parameters with selection probabilities controls the size of the restricted candidate list (RCL); the threshold parameter is chosen by roulette-wheel selection. The initial solution is then improved by variable neighborhood search (VNS). The selection probabilities are initially equal and are updated periodically after a certain number of iterations: each threshold parameter is evaluated on its historical performance records, and its probability is adjusted accordingly. This mechanism aims to gradually find better average solutions. Experiments on a set of benchmark instances with different bus types confirm that the proposed algorithm is more effective than classical GRASP and gives better results than existing algorithms for the heterogeneous-fleet school bus routing problem.
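The parameter-selection mechanism can be sketched as roulette-wheel selection over threshold values, with probabilities periodically re-derived from each threshold's historical solution quality. The quality-proportional scoring rule below is an illustrative assumption, not the paper's exact evaluation.

```python
import random

# Roulette-wheel selection over RCL threshold parameters, with
# probabilities re-derived from historical performance (assumed rule:
# probability proportional to average observed quality).
def reweight(history):
    # history[i]: list of quality scores observed for parameter i.
    avgs = [sum(h) / len(h) for h in history]
    total = sum(avgs)
    return [a / total for a in avgs]

def roulette(params, probs, rng):
    r, acc = rng.random(), 0.0
    for p, pr in zip(params, probs):
        acc += pr
        if r <= acc:
            return p
    return params[-1]

probs = reweight([[0.9, 0.8], [0.4, 0.3]])   # first threshold performed better
alpha = roulette([0.2, 0.6], probs, random.Random(0))
```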
Improved GSO Algorithm for Solving PFSP
ZHANG Li-hong and YU Shi-ming
Computer Science. 2016, 43 (8): 240-243.  doi:10.11896/j.issn.1002-137X.2016.08.048
An improved discrete glowworm swarm optimization (DGSO) algorithm was proposed for the permutation flow shop scheduling problem (PFSP) with the objective of minimizing makespan. On the basis of the traditional glowworm swarm optimization algorithm, a random-key encoding based on ascending order is used to discretize the glowworm population. The population is then initialized with the NEH algorithm; the crossover and mutation ideas of genetic algorithms are used to improve the position-update strategy; and individual mutation resolves the problem of isolated individuals, improving the optimization capability. Finally, the improved algorithm was tested on typical benchmark instances. Simulation results show its efficiency and superiority: it clearly outperforms the traditional glowworm swarm optimization algorithm and the genetic algorithm, making it an effective algorithm for flow shop scheduling.
Multi-scale Clustering Mining Algorithm
HAN Yu-hui, ZHAO Shu-liang, LIU Meng-meng, LUO Yan and DING Ya-fei
Computer Science. 2016, 43 (8): 244-248.  doi:10.11896/j.issn.1002-137X.2016.08.049
Data mining has made some progress in multi-scale research, but current work mostly focuses on multi-scale mining of spatial or image data, and traditional clustering has not separately studied the multi-scale characteristics of datasets. This paper therefore carried out a general study of multi-scale clustering theories and methods. Firstly, we extended the definition of scale on the basis of concept hierarchies and built multi-scale datasets. Secondly, we expounded the causes and classification of scale conversion and formulated the definition of multi-scale clustering. Then, we introduced a multi-scale clustering scaling-up algorithm (MSCSUA) and a multi-scale clustering scaling-down algorithm (MSCSDA) based on kriging theory. Finally, simulation experiments tested MSCSUA and MSCSDA on public UCI clustering datasets and a demographic dataset from H province, and the results show that both algorithms are effective and feasible.
Deep Belief Networks Research Based on Maximum Information Coefficient
ZENG An and ZHENG Qi-mi
Computer Science. 2016, 43 (8): 249-253.  doi:10.11896/j.issn.1002-137X.2016.08.050
Traditional deep belief networks use the reconstruction error as the training criterion for restricted Boltzmann machine (RBM) networks; this reflects the likelihood between the RBM and the training samples to some extent, but it is not reliable. The maximum information coefficient (MIC), based on estimates of Shannon entropy and conditional entropy, identifies interesting relationships between pairs of variables in large data sets and captures a subset of highly related features; being robust to outliers, it can serve as a criterion for evaluating a network. To build models that fit the data well and to reduce classification error, a deep belief network based on MIC was proposed, in which MIC is used both for dimensionality reduction and to remedy the unreliability of the reconstruction error. Classification experiments on the MNIST and USPS handwriting data sets, comparing several traditional methods with the MIC-based deep belief network, show that the latter improves the recognition rate effectively.
Research on Adaptive Genetic Algorithm in Application of Focused Crawler Search Strategy
JING Wen-peng, WANG Yu-jian and DONG Wei-wei
Computer Science. 2016, 43 (8): 254-257.  doi:10.11896/j.issn.1002-137X.2016.08.051
How to design a crawler search strategy that improves coverage and accuracy has become a hot research point for focused crawlers. Most crawlers use a best-first search algorithm, but a focused crawler based on this strategy easily falls into local optima. We therefore combined a genetic algorithm with the focused crawler search strategy, setting a dynamic fitness function and genetic operators so that the crawler gains a certain adaptive search ability. Comparisons with crawlers using other search strategies, or combined with a traditional genetic algorithm search strategy, show that the algorithm can improve the crawler's search ability to some extent.
Anomaly Detection Algorithm Based on Improved K-means Clustering
ZUO Jin and CHEN Ze-mao
Computer Science. 2016, 43 (8): 258-261.  doi:10.11896/j.issn.1002-137X.2016.08.052
After optimizing the random selection of initial cluster centers, an anomaly detection algorithm based on improved K-means clustering was proposed. When cluster centers are selected, the tightness of all data points is calculated, outlier regions are removed, and the K initial centers are chosen from dense data regions, avoiding the tendency of random selection to fall into local optima. With this optimized selection, the initial cluster centers are closer to the true cluster centers before iteration begins, the number of iterations is reduced, and both clustering quality and anomaly detection rate are improved. Experiments show that the improved algorithm clearly outperforms the original algorithm in clustering performance and anomaly detection.
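A minimal sketch of density-aware seeding, under assumed choices of neighborhood radius and a "seeds must be far apart" rule (the paper's tightness measure is not specified here): score points by neighborhood density, skip sparse outlier regions, and pick K seeds from distinct dense regions.

```python
# Density-aware K-means seeding sketch (radius and separation rule are
# illustrative assumptions, not the paper's exact tightness measure).
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def dense_seeds(points, k, radius):
    # Local density = number of points within `radius` (including self).
    density = [sum(1 for q in points if dist(p, q) <= radius) for p in points]
    order = sorted(range(len(points)), key=lambda i: -density[i])
    seeds = [points[order[0]]]
    for i in order[1:]:
        if len(seeds) == k:
            break
        # Only accept a seed that lies in a different dense region.
        if all(dist(points[i], s) > radius for s in seeds):
            seeds.append(points[i])
    return seeds

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (9, 9)]
seeds = dense_seeds(pts, k=2, radius=1.0)   # the lone outlier (9, 9) is skipped
```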
Particle Swarm Algorithm for Multi-objective Optimization Based on Intuitionistic Fuzzy Entropy
SU Ding-wei, ZHOU Chuang-ming and WANG Yi
Computer Science. 2016, 43 (8): 262-266.  doi:10.11896/j.issn.1002-137X.2016.08.053
A particle swarm algorithm for multi-objective optimization based on intuitionistic fuzzy entropy was proposed to overcome the poor convergence and distribution performance of existing algorithms. Firstly, the algorithm uses an intuitionistic fuzzy entropy metric to measure the diversity of the population in multi-objective space. Then three strategies, namely dynamically changing inertia weight, an external archive, and a mutation mechanism based on intuitionistic fuzzy entropy, were designed, and an intuitionistic fuzzy multi-objective programming model was built to enhance exploration, increase the diversity of the evolving population and prevent premature convergence. Simulation results indicate that the proposed algorithm has good convergence and distribution performance and is useful for multi-objective optimization problems.
Fuzzy c-means and Adaptive PSO Based Fuzzy Clustering Algorithm
GENG Zong-ke, WANG Chang-bin and ZHANG Zhen-guo
Computer Science. 2016, 43 (8): 267-272.  doi:10.11896/j.issn.1002-137X.2016.08.054
Existing PSO-based fuzzy clustering algorithms require manual setting of the PSO parameters and converge slowly. To address this, a fuzzy clustering algorithm based on fuzzy c-means and adaptive PSO was proposed. Firstly, the fuzzy c-means algorithm generates the initial solution, making the search process more directed. Then an improved adaptive PSO trains and optimizes over the dataset, with the PSO parameters adjusted adaptively during training to reach a better optimum. Lastly, the fuzzy c-means algorithm performs the final fuzzy clustering. Comparative experiments show that the proposed method greatly improves computational speed and achieves good clustering performance.
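The fuzzy c-means step that produces the initial solution relies on the standard membership update u_ij = 1 / Σ_k (d_ij / d_ik)^(2/(m-1)); a minimal sketch with fuzzifier m = 2 (the fuzzifier value is an assumption):

```python
# One standard fuzzy c-means membership update for a single point:
# u_j = 1 / sum_k (d_j / d_k)^(2/(m-1)), with fuzzifier m = 2 assumed.
def memberships(point, centers, m=2.0):
    d = [max(1e-12, sum((p - c) ** 2 for p, c in zip(point, cj)) ** 0.5)
         for cj in centers]
    expo = 2.0 / (m - 1.0)
    return [1.0 / sum((d[j] / d[k]) ** expo for k in range(len(d)))
            for j in range(len(d))]

# A point closer to the first center gets the larger membership.
u = memberships((1.0, 0.0), [(0.0, 0.0), (3.0, 0.0)])
```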
Research on Coupling Model of Text Classification
SUN Jin-guang and QUAN Wen-jing
Computer Science. 2016, 43 (8): 273-276.  doi:10.11896/j.issn.1002-137X.2016.08.055
In this paper, we discussed the main features of deep neural network classifiers and methods of text feature extraction. First, the coupling relationships between and within text features are analyzed to build textual features for classification and to establish a coupling relation model of text features. Secondly, the text features are fed to the input layer of the deep neural network. Finally, the network is trained without supervision, and discriminative nodes are added at the top layer to perform text classification.
Fast Feature Extraction Algorithm Based on Manifold Learning and Sparsity Constraints
REN Ying-chun, WANG Zhi-cheng, CHEN Yu-fei, ZHAO Wei-dong and PENG Lei
Computer Science. 2016, 43 (8): 277-281.  doi:10.11896/j.issn.1002-137X.2016.08.056
Aiming at the unsupervised nature and the time-consuming L1-norm optimization of existing sparsity preserving projection, a novel fast feature extraction algorithm, sparsity preserving discriminative learning (SPDL), was proposed by integrating sparse representation information with the manifold structure of the data. SPDL first creates a concatenated dictionary by class-wise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Secondly, a local between-class separability function is defined to characterize the scatter of samples in different sub-manifolds. SPDL then integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the method is transformed into a generalized eigenvalue problem. Extensive experimental results on several public face databases demonstrate the effectiveness of the proposed approach.
Fatigue Recognition Based on Sparse Representation
NIU Geng-tian, WANG Chang-ming and MENG Hong-bo
Computer Science. 2016, 43 (8): 282-285.  doi:10.11896/j.issn.1002-137X.2016.08.057
To address the traffic safety problems caused by fatigue driving, a method based on sparse representation was proposed to detect fatigue from face images. First, Gabor wavelets extract multi-scale and multi-orientation features; meanwhile, considering the execution efficiency of the algorithm, 2D-PCA reduces the feature dimensionality. Finally, based on sparse representation theory, an over-complete dictionary of fatigue is constructed and fatigue is identified. The proposed method was tested on a self-built database; experimental results show its effectiveness, with a fatigue recognition rate of 94.5%.
Face Recognition Based on Improved Adaptive Locality Preserving Projection
MEI Ling-ling and GONG Qu
Computer Science. 2016, 43 (8): 286-291.  doi:10.11896/j.issn.1002-137X.2016.08.058
Locality preserving projections (LPP) aims to preserve the local structure of the data by constructing a nearest-neighbor graph. In constructing this graph, LPP faces the difficulty of selecting two parameters, K and σ. Since the nearest-neighbor graph plays an important role in recognition, the selection of these two parameters affects the discrimination ability of LPP. In order to avoid the effect of parameter selection on the recognition rate, a face recognition algorithm based on improved adaptive locality preserving projection was proposed. Firstly, a parameter-free graph construction strategy is designed, which adaptively chooses the neighbors of each sample point and determines the corresponding edge weights. Then, because of the high dimensionality encountered in the matrix calculations, QR decomposition is used to reduce the dimension. Finally, conjugate orthogonalization is used to reduce the statistical correlation between feature vectors, improving the recognition rate by ensuring that the projection axes are statistically uncorrelated. Experimental results on the ORL database show that the new algorithm outperforms the LPP, DLPP, and LMMC algorithms in terms of recognition rate.
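One way to build an affinity graph without a global K or σ is to give every sample its own local scale. The sketch below uses a common self-tuning scheme as a stand-in; the paper's exact parameter-free construction rule is not specified in the abstract.

```python
import numpy as np

def adaptive_affinity(X):
    """Parameter-free affinity graph: each sample uses its mean squared
    distance to the other samples as its own local scale, so no global
    K or sigma has to be chosen (one common self-tuning scheme)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    n = len(X)
    sigma2 = d2.sum(axis=1) / (n - 1)          # per-sample local scale
    W = np.exp(-d2 / np.sqrt(np.outer(sigma2, sigma2)))
    np.fill_diagonal(W, 0.0)
    return W

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W = adaptive_affinity(X)
print(W.shape)
```

Nearby points receive large weights and distant points small ones, with the scale adapting to each point's neighborhood density, which is the behavior the hand-tuned K and σ are meant to achieve in standard LPP.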
Multiple Density Leaf Reconstruction Based on Limited Details
ZENG Lan-ling, ZHANG Wei, YANG Yang and ZHAN Yong-zhao
Computer Science. 2016, 43 (8): 292-296.  doi:10.11896/j.issn.1002-137X.2016.08.059
To address the noise and edge-adhesion problems of rough point clouds in plant modeling, we presented a multiple-density point cloud reconstruction algorithm based on limited details. Firstly, we use color information to sparsify the original point cloud captured by Kinect, and then separate the adhesive parts to obtain an ideal point cloud. On this basis, a new reconstruction method, named multiple density with limited details, is proposed based on the limits of human visual recognition. Unlike traditional mesh reconstruction, the algorithm generates a fuzzy surface by refining the density of points in order to approximate the surface. Finally, experiments show that both the result quality and the speed of the new algorithm are better than mesh reconstruction to a certain extent.
New Method of Robust Voiceprint Feature Extraction and Fusion
LUO Yuan and SUN Long
Computer Science. 2016, 43 (8): 297-299.  doi:10.11896/j.issn.1002-137X.2016.08.060
In order to improve the robustness of speaker verification systems in noisy environments, this paper improved MFCC based on an auditory periphery model, fused the improved MFCC with PLPC according to inter-cluster distinctness, obtained a new voiceprint feature, and tested its robustness under noise. An experiment comparing the fused feature with the original features at different SNRs was conducted. The experimental results show that in simulated restaurant noise, the fused feature improves the voiceprint recognition rate by 2.2% and 3.1% compared with MFCC and PLPC respectively.
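Fusion driven by "inter-cluster distinctness" can be approximated by scoring each feature dimension with a Fisher-style between/within class scatter ratio and weighting the concatenated features accordingly. The sketch below is a stand-in for the paper's exact criterion, with illustrative random data:

```python
import numpy as np

def fisher_weights(feats, labels):
    """Weight each feature dimension by a Fisher-style between/within
    class scatter ratio (a simple proxy for inter-cluster distinctness)."""
    feats = np.asarray(feats, dtype=float)
    overall = feats.mean(axis=0)
    between = np.zeros(feats.shape[1])
    within = np.zeros(feats.shape[1])
    for c in np.unique(labels):
        Fc = feats[labels == c]
        between += len(Fc) * (Fc.mean(axis=0) - overall) ** 2
        within += ((Fc - Fc.mean(axis=0)) ** 2).sum(axis=0)
    score = between / (within + 1e-12)
    return score / score.sum()

def fuse(mfcc, plpc, labels):
    """Concatenate the two feature sets and scale by separability weights."""
    stacked = np.hstack([mfcc, plpc])
    return stacked * fisher_weights(stacked, labels)

rng = np.random.default_rng(2)
labels = np.array([0] * 5 + [1] * 5)
mfcc = rng.normal(size=(10, 3)) + labels[:, None]   # mildly separable
plpc = rng.normal(size=(10, 2))
fused = fuse(mfcc, plpc, labels)
print(fused.shape)
```

Dimensions that separate the speakers well dominate the fused feature, while noisy, uninformative dimensions are down-weighted.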
Gait Recognition Method Based on Modified Gait Energy Image and View Detection
LI Jing, ZHANG Jing and NI Jun
Computer Science. 2016, 43 (8): 300-303.  doi:10.11896/j.issn.1002-137X.2016.08.061
To solve the problem that variations in view, clothing, and carried objects degrade gait recognition performance, a gait recognition method based on a modified gait energy image and view detection was proposed. Firstly, the method modifies the gait energy image to reduce the impact of clothing and carried-object variations. Then it extracts entropy features from the modified gait energy image and detects the view of an unknown gait sequence according to the nearest neighbor criterion. Finally, it extracts gait features by combining two-dimensional principal component analysis and two-dimensional linear discriminant analysis, and performs classification using the nearest neighbor criterion to reduce the impact of view variation. Experiments on the CASIA B dataset show that the new method has a high average recognition rate and is robust to variations in view, clothing, and carried objects.
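The underlying gait energy image (GEI) is simply the pixel-wise mean of aligned binary silhouettes over one gait cycle, and a per-pixel entropy map can be derived from it. A minimal sketch; the paper's specific modification of the GEI is not described in the abstract and is omitted here.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image: pixel-wise mean of aligned binary silhouettes
    over one gait cycle (values in [0, 1])."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

def gei_entropy(gei, eps=1e-12):
    """Per-pixel binary Shannon entropy of the GEI; pixels that change
    across the cycle (p near 0.5) carry the most dynamic information."""
    p = np.clip(gei, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# toy cycle of two 2x2 silhouettes
frames = np.array([
    [[0, 1], [1, 1]],
    [[0, 1], [0, 1]],
])
gei = gait_energy_image(frames)
H = gei_entropy(gei)
print(gei)   # [[0.  1. ] [0.5 1. ]]
```

Static pixels (always 0 or always 1) have near-zero entropy, while the pixel that toggles between frames reaches the maximum entropy of 1 bit, which is why entropy features emphasize the moving parts of the gait.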
Super Resolution Algorithm Based on Sub-pixel Block Matching and Dictionary Learning
XU Yu-ming, SONG Jia-wei and XIAO Xian-jian
Computer Science. 2016, 43 (8): 304-308.  doi:10.11896/j.issn.1002-137X.2016.08.062
Because algorithms based on dictionary learning have low computational efficiency and are limited to single-frame reconstruction, a super resolution algorithm based on sub-pixel block matching and dictionary learning was introduced to realize the reconstruction of multiple frames. Firstly, image registration is performed by sub-pixel block matching; then the low-resolution dictionary is constructed according to the registration result to reduce the amount of calculation, and the patches for reconstruction are selected by calculating the similarity between the target image and the auxiliary images. Experiments on the MATLAB platform show that the algorithm achieves better reconstruction on both still and motion images.
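Sub-pixel matching is commonly implemented by finding the best integer shift and refining it with a parabola fitted through the neighbouring match costs. A 1-D sketch under that assumption follows; the paper's exact matching rule is not given in the abstract.

```python
import numpy as np

def subpixel_shift_1d(ref, tgt, max_shift=3):
    """Estimate a sub-pixel shift between two 1-D signals:
    pick the integer shift with the lowest SSD, then refine it by
    fitting a parabola through the SSD values of its two neighbours."""
    shifts = range(-max_shift, max_shift + 1)
    ssd = []
    for s in shifts:
        ssd.append(np.sum((ref - np.roll(tgt, s)) ** 2))
    ssd = np.array(ssd, dtype=float)
    i = int(np.argmin(ssd))
    if 0 < i < len(ssd) - 1:
        l, c, r = ssd[i - 1], ssd[i], ssd[i + 1]
        denom = l - 2 * c + r
        frac = 0.5 * (l - r) / denom if denom != 0 else 0.0
    else:
        frac = 0.0               # minimum on the search border: no refinement
    return list(shifts)[i] + frac

x = np.arange(32)
ref = np.sin(2 * np.pi * x / 32)
tgt = np.sin(2 * np.pi * (x - 0.4) / 32)    # ref delayed by 0.4 samples
est = subpixel_shift_1d(ref, tgt)
print(f"{est:.2f}")                          # close to -0.4
```

Rolling `tgt` by the (negative) estimated shift aligns it with `ref` to sub-pixel accuracy; in 2-D the same parabola refinement is applied per axis around the best integer block match.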
Improved Canny Edge Detection Algorithm
LING Feng-cai, KANG Mu and LIN Xiao
Computer Science. 2016, 43 (8): 309-312.  doi:10.11896/j.issn.1002-137X.2016.08.063
The Canny operator is built on optimality criteria; it has high detection accuracy and is widely used. Since the operator is easily affected by noise, the noise must first be filtered out with a Gaussian filter. This paper analyzed the defects of the original non-maximum suppression judging condition, proposed a new judging condition, and designed a new gradient detection operator that effectively suppresses noise. Analysis of the data and experimental results indicates the effectiveness of the improved algorithm.
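The original judging condition that the paper improves on is standard non-maximum suppression: a pixel survives only if its gradient magnitude is no smaller than its two neighbours along the (4-way quantized) gradient direction. A baseline sketch, not the paper's improved condition:

```python
import numpy as np

def non_max_suppression(mag, angle):
    """Classic Canny NMS: thin edges by keeping a pixel only when its
    gradient magnitude is a local maximum along the gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    ang = np.mod(angle, 180.0)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:       # horizontal gradient: E/W neighbours
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # 45-degree diagonal
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # vertical gradient: N/S neighbours
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # 135-degree diagonal
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# a vertical edge: the gradient points horizontally (angle 0)
mag = np.array([[0, 5, 1],
                [0, 5, 1],
                [0, 5, 1]], dtype=float)
thin = non_max_suppression(mag, np.zeros_like(mag))
print(thin)
```

The known defect of this condition is that the comparison uses coarsely quantized directions and raw neighbour magnitudes, so noise can break or thicken edges, which is what the paper's new judging condition targets.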
Aerial Video Image Stabilization Based on Affine Invariant Constraint and Fast EKF Adaptive Filter
YI Meng and CHU Yan
Computer Science. 2016, 43 (8): 313-317.  doi:10.11896/j.issn.1002-137X.2016.08.064
In view of the serious jitter of airborne imaging, the varying accuracy of matching points obtained in aerial video stabilization, and the need for fast and precise stabilization, an image stabilization algorithm was proposed combining an affine invariant constraint with an adaptive Extended Kalman Filter (EKF). Firstly, corners are detected in the reference frame as feature points, and locally stable points are selected by the Harris detector. Then Delaunay triangulation is used to find the initial matching, and the most accurate matching points, those best satisfying the affine invariant constraint, are retained. Finally, an EKF motion filter is used to estimate and correct the statistical properties of the noise, so that camera jitter is removed while scanning motion is retained. In a large number of simulation experiments on aerial images with a resolution of 640×480 pixels, an accurate model can be estimated from the affine invariant constraints, and the fast motion compensation method takes 5.054 ms per compensation, 69.5% less than the traditional motion compensation method. The experimental results illustrate that the proposed algorithm can stabilize inter-frame jitter and track the real scene effectively.
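The key idea of removing jitter while keeping scanning motion can be illustrated with a linear constant-velocity Kalman filter over the estimated global motion: the filtered track follows the slow intentional pan, and the residual is the jitter to compensate. The paper itself uses an adaptive EKF variant, so this is only a simplified sketch.

```python
import numpy as np

def kalman_smooth_motion(positions, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over per-frame global-motion
    estimates. q: process noise (trust in the smooth-motion model),
    r: measurement noise. Returns the smoothed (intentional) track."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[positions[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in positions:
        # predict with the constant-velocity model
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measured global motion
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return np.array(smoothed)

t = np.arange(50, dtype=float)
raw = 0.5 * t + np.sin(2.5 * t)        # slow pan plus high-frequency shake
smooth = kalman_smooth_motion(raw)
compensation = raw - smooth            # per-frame shift that removes jitter
```

Compensating each frame by `raw - smooth` cancels the shake but leaves the scanning pan intact, which is the behavior described in the abstract.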
Implementation for Compressed Sensing Reconstruction Algorithm Based on GPU
ZHANG Jing, XIONG Cheng-yi and GAO Zhi-rong
Computer Science. 2016, 43 (8): 318-322.  doi:10.11896/j.issn.1002-137X.2016.08.065
Aiming at the real-time requirements of large-scale compressed sensing reconstruction, the acceleration and implementation of the Orthogonal Matching Pursuit (OMP) algorithm on a Graphics Processing Unit (GPU) were discussed. To reduce the high latency of transfers between the central processing unit and the GPU, the whole iterative process of the OMP algorithm is moved to the GPU for parallel execution. According to the access characteristics of global memory, the CUDA program is improved so that memory accesses meet the coalesced-access conditions, reducing access latency. At the same time, according to the resource conditions of the Streaming Multiprocessors (SM), the allocation of shared memory per SM is increased. In addition, bank conflicts are reduced by improving the thread access pattern, increasing memory access speed. Tests on an NVIDIA Tesla K20Xm GPU and an Intel(R) E5-2650 CPU show that the time-consuming projection module and the weight-update module achieve speedups of 32× and 46× respectively, and the whole algorithm achieves a 34× speedup.
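A plain CPU reference of OMP makes the two accelerated modules visible: the projection (correlation) step and the least-squares weight update that follow each atom selection. A sketch of the standard algorithm, not of the CUDA implementation:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit all chosen atoms to y
    by least squares and update the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # projection module: correlate every atom with the residual
        corr = D.T @ residual
        support.append(int(np.argmax(np.abs(corr))))
        # weight-update module: least squares on the current support
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - Ds @ coef
    return x

# orthonormal dictionary, so exact recovery is guaranteed
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
true = np.zeros(8)
true[[2, 5]] = [3.0, -1.5]
y = Q @ true
x = omp(Q, y, k=2)
print(np.flatnonzero(np.abs(x) > 1e-8))   # [2 5]
```

The correlation `D.T @ residual` is a dense matrix-vector product and dominates the runtime at large scale, which is why it is the module that benefits most from coalesced global-memory access on the GPU.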