Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Volume 44 Issue 5, 13 November 2018
Review on Main Melody Extraction from Pop Music
LI Wei, FENG Xiang-yi, WU Yi-ming and ZHANG Xu-long
Computer Science. 2017, 44 (5): 1-5.  doi:10.11896/j.issn.1002-137X.2017.05.001
Abstract PDF(494KB) ( 1366 )   
References | Related Articles | Metrics
Melody is one of the most important elements of music, with many direct and indirect applications in music content analysis, music creation, music education and the protection of music intellectual property. Main melody extraction aims to produce a sequence of frequency values corresponding to the pitches of the dominant melody in a musical recording. Due to the complexity and specificity of pop music, main melody extraction from pop music turns out to be highly challenging. This paper reviewed research on main melody extraction, and classified and summarized the methods used for it. Finally, evaluation methodologies for main melody extraction were presented.
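The core step the abstract describes, producing a frequency value per frame, can be illustrated with a minimal autocorrelation-based pitch estimator. This is a generic textbook baseline for illustration only, not one of the specific methods the survey covers; the frame size and frequency range are arbitrary assumptions.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame via autocorrelation:
    the best pitch period is the lag with the highest self-similarity."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)       # admissible lag range
    lag = lo + int(np.argmax(ac[lo:hi + 1]))      # lag of maximum correlation
    return sr / lag                               # period -> frequency

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)              # 220 Hz test tone
f0 = estimate_f0(tone[:1024], sr)                 # ~220 Hz
```

A melody extractor would run such an estimator frame by frame and then smooth the resulting pitch contour; the surveyed methods replace this simple salience function with far more robust ones.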
Overview of Secure Top-k Query Processing in Two-tiered Wireless Sensor Networks
DAI Hua, YE Qing-qun, YANG Geng, XIAO Fu and HE Rui-liang
Computer Science. 2017, 44 (5): 6-13.  doi:10.11896/j.issn.1002-137X.2017.05.002
Abstract PDF(830KB) ( 571 )   
Research on secure data query in wireless sensor networks has attracted more and more attention, and secure Top-k query in two-tiered wireless sensor networks, in which storage nodes form the intermediate layer of the network, is an important topic. Current research on secure Top-k queries in two-tiered sensor networks mainly concentrates on data privacy preservation and query result verification. This paper surveyed the state of the art of secure Top-k query techniques in two-tiered sensor networks along two dimensions, security and communication performance, and introduced the network model, the query model and the security issues in the query process. It summarized the key techniques of current protocols together with their main advantages and disadvantages. Finally, it pointed out possible research directions for the future.
Auto-parallelization Research Based on Branch Nested Loops
DING Li-li, LI Yan-bing, ZHANG Su-ping, WANG Peng-xiang and ZHANG Qing-hua
Computer Science. 2017, 44 (5): 14-19.  doi:10.11896/j.issn.1002-137X.2017.05.003
Abstract PDF(609KB) ( 1160 )   
The GCC compiler is an open-source compiler system that has won favour among many researchers; however, it is only able to analyze the dependences of perfect nested loops. In order to efficiently exploit the parallelism of nested loops, we analyzed the data dependence framework of GCC 5.1 in depth and put forward a dependence testing method for branch nested loops. First, the branch nested loop is recognized. Then, the relationship between the array index and the index variable of the outer loop is identified. Finally, the distance vector of the outer loop is computed, and whether the loop carries a dependence is decided by examining the distance vector. The experimental results show that the proposed method can effectively recognize the dependences of branch nested loops.
MapReduce Based Similarity Self-join Algorithm for Big Dataset
SUN De-cai and WANG Xiao-xia
Computer Science. 2017, 44 (5): 20-25.  doi:10.11896/j.issn.1002-137X.2017.05.004
Abstract PDF(629KB) ( 756 )   
Finding duplicates and similar items in a dataset is a key issue in big data processing. Similarity join is a valid operation for finding similar items, and similarity join algorithms based on MapReduce have attracted serious attention for their advantage in processing big datasets. In this paper, similarity self-join algorithms were studied and several factors that slow down self-join were identified. To accelerate self-join, an improved similarity self-join algorithm based on Massjoin was proposed. In the filtration stage, a new filtration criterion is added to eliminate redundant self-join pairs. In the verification stage, the techniques of backward-forward pairs and combined ids are adopted to eliminate more redundant candidate pairs, and the dataset is scanned only once when reading the original strings. The experimental results demonstrate that both the filtration CPU time and the verification CPU time of the new algorithm decrease. As a result, the efficiency of the similarity self-join algorithm is increased by the proposed revision strategies.
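The redundancy a self-join must filter can be seen in a deliberately naive sketch: because similarity is symmetric, enforcing an id ordering r &lt; s already discards every mirrored candidate pair. This toy single-machine version (with Jaccard similarity over character sets, an assumption for illustration) shows only that ordering filter, not Massjoin's filtration and verification stages.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two strings viewed as character sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def similarity_self_join(records, threshold):
    """Naive self-join: iterating id-ordered pairs (r < s) discards each
    symmetric duplicate (s, r), halving the candidate set up front."""
    result = []
    for (rid, r), (sid, s) in combinations(sorted(records.items()), 2):
        if jaccard(r, s) >= threshold:     # verification stage
            result.append((rid, sid))
    return result

records = {1: "abcd", 2: "abce", 3: "xyz"}
pairs = similarity_self_join(records, 0.5)  # only (1, 2) survives
```

In the MapReduce setting, the same ordering criterion is applied inside the filtration stage so that redundant pairs never reach the expensive verification reducers.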
Optimization of 3D Finite Difference Algorithm on Intel MIC
HAO Xin and GUO Shao-zhong
Computer Science. 2017, 44 (5): 26-32.  doi:10.11896/j.issn.1002-137X.2017.05.005
Abstract PDF(583KB) ( 553 )   
The finite difference algorithm is a numerical discretization method for partial differential equations that is widely applied in elastic wave propagation simulation. Because of its high computation density, long-distance memory access pattern and low CPU utilization, it becomes the performance bottleneck in practical applications. To solve these problems, this paper examined the key points of the 3D finite difference (3DFD) algorithm and proposed a three-step progressive method to optimize it on the Intel MIC. Firstly, basic optimization methods, such as branch elimination, loop unrolling and invariant extraction, were applied to reduce computation strength and remove the obstacles to SIMD (Single Instruction Multiple Data) vectorization. Secondly, by leveraging parallel optimization methods such as data dependence analysis, loop tiling and intrinsic SIMD instructions, the method takes full advantage of the MIC coprocessor's many threads and long vector units. Finally, heterogeneous cooperative optimization methods, such as minimizing data transfers and load balancing, were applied on the CPU+MIC (Many Integrated Cores) platform, parallelizing the algorithm across both the CPU and the MIC. Experimental results show that the optimized 3DFD algorithm gains a 50x to 120x speedup over the original algorithm.
Blind Estimation of Seismic Wavelet on Unit Hypersphere with Mean Shift
XIAO Yun-shi, ZHAO Yan-qing and CHENG Cheng
Computer Science. 2017, 44 (5): 33-36.  doi:10.11896/j.issn.1002-137X.2017.05.006
Abstract PDF(1012KB) ( 469 )   
In seismic exploration, seismic data can be described as the convolution of a seismic wavelet with a reflectivity sequence. Seismic deconvolution is essentially a blind process due to the lack of a priori knowledge. Wavelet estimation by BICA has multiple solutions, and the solutions conform to a unit-norm constraint. In this paper, a wavelet space model was established on the unit hypersphere and its Riemannian metric and gradient were studied; a Mean Shift clustering algorithm on the unit hypersphere was then presented, and an average wavelet can be estimated from the clustering result. Results on synthetic and real data show that, compared with the BICA method, the estimated wavelet has higher fidelity and similarity to the designed wavelet, and the real data have higher resolution after deconvolution.
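The key geometric idea, mean shift restricted to the unit hypersphere, can be sketched as follows: each point moves to a kernel-weighted mean of the data and is re-projected onto the sphere, so iterates stay unit-norm like the candidate wavelets. This is a generic spherical mean-shift sketch with an arbitrary Gaussian-type kernel, not the paper's Riemannian formulation.

```python
import numpy as np

def spherical_mean_shift(points, bandwidth=0.5, iters=50):
    """Mean shift on the unit hypersphere: shift each point to the
    kernel-weighted Euclidean mean, then renormalize back onto the sphere."""
    modes = points.copy()
    for _ in range(iters):
        # kernel weight grows with cosine similarity (modes . points)
        w = np.exp((modes @ points.T - 1.0) / bandwidth ** 2)
        shifted = w @ points
        modes = shifted / np.linalg.norm(shifted, axis=1, keepdims=True)
    return modes

rng = np.random.default_rng(0)
base = np.array([1.0, 0.0, 0.0])                   # "true" wavelet direction
pts = base + 0.05 * rng.standard_normal((20, 3))   # noisy unit-norm estimates
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
modes = spherical_mean_shift(pts)                  # all collapse near base
```

Averaging the points that converge to the same mode yields the single representative wavelet the abstract describes.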
Trust and Weight Based Data Fusion Model in Wireless Sensor Networks
ZHANG Feng, ZHENG Hong-yuan and DING Qiu-lin
Computer Science. 2017, 44 (5): 37-41.  doi:10.11896/j.issn.1002-137X.2017.05.007
Abstract PDF(515KB) ( 594 )   
In wireless sensor networks, the reliability of data fusion is a very important and widely studied issue. Thus, a trust and weight based data fusion model (TWDFM) was proposed in this paper. In this model, sensor nodes elect a reliable cluster head by building trust tables, and the cluster head detects abnormal nodes according to the weights and fuses the credible data. Simulation results show that this model can effectively improve the safety and accuracy of data fusion.
Application of Atomic Decomposition Algorithm Based on Sparse Representation in AR Model Parameters Estimation
JIANG Yu-jie, LIU Guo-qing and WANG Tian-jing
Computer Science. 2017, 44 (5): 42-47.  doi:10.11896/j.issn.1002-137X.2017.05.008
Abstract PDF(452KB) ( 537 )   
Aiming at the problem of AR model order and parameter estimation, a novel algorithm based on the sparse representation of an atomic decomposition was proposed. Firstly, an over-complete sparse dictionary was constructed according to the characteristics of the autocorrelation coefficients of the AR model. Secondly, for noisy signals, slack variables were used to establish a new optimization model for the sparse recovery of the roots of the AR model's characteristic polynomial. Finally, the parameter estimation problem was converted into a best basis selection problem, which is solved by a modified affine scaling method. Experiments show that our algorithm is more effective than traditional methods in terms of forecasting precision and robustness.
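For context, the classical baseline such sparse methods are compared against is the Yule-Walker estimator, which recovers AR coefficients from the signal's autocovariances. The sketch below is that generic baseline, not the paper's atomic-decomposition algorithm; the AR(2) coefficients are made-up test values.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations
    R a = r, where R is the Toeplitz autocovariance matrix."""
    x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Simulate x[t] = 0.6 x[t-1] - 0.3 x[t-2] + e[t]
rng = np.random.default_rng(1)
true_a = [0.6, -0.3]
x = np.zeros(5000)
e = rng.standard_normal(5000)
for t in range(2, 5000):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + e[t]
a_hat = yule_walker(x, 2)   # close to [0.6, -0.3]
```

The paper's contribution is to replace this moment-matching step with sparse recovery over an over-complete dictionary, which is reported to be more robust under noise.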
Suboptimal Dynamic Resource Allocation Algorithm in OFDM Based Cognitive Radio Network
HAN Jie, SONG Xiao-qin, DONG Li and JIN Hui
Computer Science. 2017, 44 (5): 48-52.  doi:10.11896/j.issn.1002-137X.2017.05.009
Abstract PDF(407KB) ( 606 )   
This paper investigated multi-user resource allocation, including subcarrier allocation and power allocation, in OFDM-based cognitive radio (CR-OFDM). In a CR system, not only must the interference between primary users (PUs) and secondary users (SUs) be considered, but the interference caused by SUs must also be kept under a threshold, so the system model is more complicated. Because of the integer constraints, the complexity of an algorithm that obtains the optimal solution is too high for real-time systems. Therefore, this paper proposed a distributed algorithm that obtains a suboptimal solution with low complexity. First, a subcarrier allocation algorithm considering the power constraint and the interference constraint was proposed, and then a modified linear water-filling algorithm was put forward to allocate power. Simulation results show that, compared with the optimal Lagrange dual method, the proposed algorithm obtains satisfactory system capacity with reduced complexity, making it more suitable for real-time systems.
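The power allocation step builds on classic water-filling: each subcarrier i receives power max(0, mu - 1/g_i), with the water level mu set so the powers exhaust the budget. The sketch below is the textbook version without the paper's interference-constraint modification; the gains and budget are arbitrary example numbers.

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling: allocate p_i = max(0, mu - 1/g_i) with mu
    chosen so that the active powers sum to the total power budget."""
    inv = 1.0 / np.asarray(gains, dtype=float)    # noise-to-gain "floor"
    order = np.argsort(inv)                       # best channels first
    for k in range(len(gains), 0, -1):            # try largest active set first
        active = order[:k]
        mu = (total_power + inv[active].sum()) / k
        if mu > inv[active].max():                # every active power positive
            p = np.zeros(len(gains))
            p[active] = mu - inv[active]
            return p
    return np.zeros(len(gains))

p = water_filling([2.0, 1.0, 0.1], total_power=1.0)
# strong channels get more power; the very weak third channel gets none
```

The paper's "modified linear" variant additionally caps the per-subcarrier power so the aggregate interference at the primary users stays below the threshold.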
High-throughput MAC Protocol for Industrial Convergent Wireless Networks
XU Yi-da and XU Chao-nong
Computer Science. 2017, 44 (5): 53-60.  doi:10.11896/j.issn.1002-137X.2017.05.010
Abstract PDF(2393KB) ( 494 )   
A convergent topology is widely used in industrial wireless sensor networks, and its throughput is adversely influenced by the hidden terminal problem. Nowadays, channel reservation schemes are utilized to deal with the notorious hidden terminal problem. However, the increasing data rate in the physical layer leads to relatively short packets, which increases the relative overhead of channel reservation. Since IEEE 802.11 DCF cannot effectively deal with these two problems simultaneously, we proposed a novel protocol, termed APCSMA, for convergent topologies. Specifically, the channel reservation scheme is eliminated to cope with the relatively short packets, and an opportunistic transmission scheme is employed to deal with the hidden terminal problem. A necessary condition was developed for the optimal transmission probability that maximizes the per-node throughput. Together with the assumption of uniformly distributed nodes, the optimal transmission probability was derived under the primary interference model. Then, the performance of the protocol was thoroughly investigated in terms of network throughput, delay and energy efficiency. Our analytical results are verified by simulations, and the results demonstrate the efficiency of the proposed protocol in practical industrial convergent wireless networks.
Dynamic Path Planning Design of Mobile Sink for Single Region Bursty Traffic
WAN Cheng, CHANG Jie and ZHANG Ling
Computer Science. 2017, 44 (5): 61-65.  doi:10.11896/j.issn.1002-137X.2017.05.011
Abstract PDF(448KB) ( 539 )   
In a wireless sensor network in which sensor nodes collect data from the whole network periodically, the bursty data traffic generated in a region needs to be sent to the base station correctly and in a short time, while the regular data of other regions must also be taken into consideration. This paper proposed a dynamic path planning algorithm for a mobile Sink. First of all, the network is divided into square virtual grids, each grid forming a cluster; every node is assigned to a grid and a cluster head is selected. Secondly, a mobile Sink collects data along the shortest path, which is established by a TSP-based algorithm. Finally, the mobile Sink changes its path dynamically to collect data when bursty data traffic appears in any region. Extensive simulations on the NS-2 platform show that the proposed algorithm can change the path of the mobile Sink dynamically to collect bursty data traffic correctly and as soon as possible, balance the accuracy and timeliness of bursty data traffic against the packet loss rate and delay of periodic regional data flows, and prolong the network lifetime as well.
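The tour over cluster heads is a TSP instance, for which a cheap heuristic suffices as illustration. The sketch below is the standard nearest-neighbor heuristic over hypothetical cluster-head coordinates; the abstract does not specify which TSP algorithm the paper uses.

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: from the current grid head, always visit
    the closest unvisited head next."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

heads = [(0, 0), (0, 1), (1, 1), (1, 0)]   # hypothetical cluster-head positions
tour = nearest_neighbor_tour(heads)        # visits all four heads once
```

Dynamic replanning then amounts to re-running the tour computation with the bursty region's head promoted to the front of the visiting order.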
Improved EEMD Algorithm Based on EEG Signal
HUANG Li-ya, DA Cheng-lu, YANG Chen, CHEN Zhi-yang and WANG Hao
Computer Science. 2017, 44 (5): 66-70.  doi:10.11896/j.issn.1002-137X.2017.05.012
Abstract PDF(471KB) ( 565 )   
In order to effectively alleviate the mode mixing problem in EEG studies, a modified EEMD algorithm adapted to brain signal research was proposed. Firstly, we screened the EMD results based on correlation, then adaptively predicted the EEG signal characteristics from the original brain signals, and fused in the properties of white Gaussian noise to generate a new noisy brain signal. Finally, EEMD was performed based on this new noise. The experimental results show that the new brain signal is not only adaptive but can also better solve the mode mixing problem of brain signals in EMD, proving the theoretical and applied value of the improved algorithm in the EEG field.
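The first step, screening EMD results by correlation, can be sketched independently of any EMD implementation: keep only the intrinsic mode functions (IMFs) whose correlation with the original signal exceeds a threshold. The threshold value and the toy "IMFs" below are illustrative assumptions.

```python
import numpy as np

def screen_imfs(signal, imfs, threshold=0.2):
    """Correlation-based screening: keep IMFs that correlate with the
    original signal above the threshold, discard near-noise components."""
    keep = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]
        if abs(r) >= threshold:
            keep.append(imf)
    return keep

t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 5 * t)
imfs = [sig + 0.01 * np.sin(2 * np.pi * 50 * t),  # strongly correlated component
        np.sin(2 * np.pi * 50 * t)]               # near-orthogonal component
kept = screen_imfs(sig, imfs)                     # only the first survives
```

The retained components are then used to shape the noise that is re-injected before the final EEMD pass.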
Research on Maximum Mutual Information Optimization in GSM Networks with Precoding
WEI Lin-jing, NING Lu-lu, LIAN Zhi-chao, DAI Yong-qiang and WANG Lian-guo
Computer Science. 2017, 44 (5): 71-74.  doi:10.11896/j.issn.1002-137X.2017.05.013
Abstract PDF(344KB) ( 511 )   
In order to improve the mutual information performance of generalized spatial modulation (GSM), a new precoding scheme based on the ellipsoid algorithm was proposed. First, in order to optimize the precoder with respect to mutual information, the analytical expression of GSM mutual information under finite-alphabet input is derived. In the process of maximizing the mutual information of GSM, the GSM system is transformed into a virtual multiple-input multiple-output (MIMO) system in order to solve the non-convex coupling problem of the joint precoding design. Then, under all the sub-channel power constraints, the extended ellipsoid algorithm is used. The experimental results show that the proposed precoding scheme greatly improves the mutual information performance of GSM.
Concept and Architecture of Trustworthy Super Network for Mobile Internet of Things
WANG Hui-qiang, WEN Xiu-xiu, LV Hong-wu, FENG Guang-sheng and LIN Jun-yu
Computer Science. 2017, 44 (5): 75-80.  doi:10.11896/j.issn.1002-137X.2017.05.014
Abstract PDF(585KB) ( 629 )   
Ubiquitous services are an urgent demand of users. The mobile Internet of Things is an important part of realizing ubiquitous services, but it suffers from the inhomogeneous distribution of hotspots, imperfect spectrum coverage and unstable network links, so the trustworthiness of its services is threatened. Meanwhile, existing frameworks for the mobile Internet of Things lack the integration of ubiquitous service guarantees and trustworthiness guarantees. In this paper, a new concept, the "trustworthy super network", was proposed, which builds virtual or logical networks on the basis of traditional networks. Furthermore, based on this concept, a trustworthy super network framework was proposed for the mobile Internet of Things. The framework aims for trustworthy assurance and takes ubiquitous service provision as its core capability, so that trustworthiness can be guaranteed when ubiquitous services are provided in mobile environments.
Research on Application of Differential Privacy in Collaborative Filtering Algorithms
XIAN Zheng-zheng, LI Qi-liang, LI Gai and LI Lei
Computer Science. 2017, 44 (5): 81-88.  doi:10.11896/j.issn.1002-137X.2017.05.015
Abstract PDF(815KB) ( 978 )   
Today, the inference of personal privacy by attackers using background knowledge has become one of the problems Internet users worry about most. Differential privacy is strictly defined and provable, and it is currently the most effective privacy protection technology for this problem. Berlioz et al[1] proposed applying differential privacy to matrix factorization, one of the popular collaborative filtering methods. Several new algorithms were proposed by Berlioz et al, but they lacked strict proofs. In this paper, we first added the proofs of these algorithms. Then the objective perturbation method proposed by Chaudhuri was applied to the objective function of ALS. In addition, a selection scheme for differential privacy was given. Finally, experimental results on two real datasets show that our approach obtains better recommendation accuracy while protecting the personal privacy in the raw data.
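The flavor of perturbing the ALS objective can be sketched with a single regularized least-squares update of one user's factor vector, with Laplace noise folded into the linear term. This is a simplified illustration in the spirit of objective perturbation, not the paper's calibrated mechanism; the dimensions, regularizer, epsilon and sensitivity are made-up example values.

```python
import numpy as np

def private_ls_update(V, ratings, lam, epsilon, sensitivity, rng):
    """One noisy ALS user-update: solve (V^T V + lam I) u = V^T r + noise,
    where the Laplace noise scale follows the sensitivity/epsilon rule."""
    d = V.shape[1]
    A = V.T @ V + lam * np.eye(d)
    b = V.T @ ratings + rng.laplace(0.0, sensitivity / epsilon, size=d)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(42)
V = rng.standard_normal((30, 4))            # item factors, fixed this half-step
u_true = np.array([1.0, -0.5, 0.3, 0.2])
r = V @ u_true                              # this user's noise-free ratings
u_hat = private_ls_update(V, r, lam=0.1, epsilon=5.0, sensitivity=0.1, rng=rng)
```

With moderate epsilon the perturbed solution stays close to the exact update, which is why recommendation accuracy can be preserved under the privacy guarantee.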
Hazard Identification Algorithm Based on Deep Extreme Learning Machine
LI Shi-yao, ZHOU Liang and LIU Hu
Computer Science. 2017, 44 (5): 89-94.  doi:10.11896/j.issn.1002-137X.2017.05.016
Abstract PDF(512KB) ( 947 )   
Hazard identification plays an important role in aviation safety management, and its results must be highly accurate to ensure flight safety. To this end, a hazard identification algorithm based on deep extreme learning machines (HIELM) was proposed, which consists of multiple stacked deep ELMs and a single ELM. The deep ELMs are arranged in parallel with different numbers of hidden nodes, and hazard information is processed to produce deep features according to the hazard area. In addition, the way input weights are generated has been enhanced with recognition features. The single ELM receives the deep ELMs' results as its input. With the help of an improved back propagation algorithm, the network can achieve much better accuracy. Training the deep ELMs separately alleviates the memory pressure and the over-fitting phenomenon when facing high-dimensional datasets.
Virtual-user-based Public Auditing Integrity in Cloud Storage
XU Yun-yun, BAI Guang-wei, SHEN Hang and HUANG Zhong-ping
Computer Science. 2017, 44 (5): 95-99.  doi:10.11896/j.issn.1002-137X.2017.05.017
Abstract PDF(465KB) ( 575 )   
A public integrity auditing mechanism based on a virtual user was proposed, addressing the collusion issue between a revoked user and the cloud. After a user is revoked from the group, the manager lets the proxy re-sign the revoked user's blocks with the virtual user's signature, utilizing proxy re-signatures to protect user privacy. In addition, the manager verifies any user who wants to access the shared data via a local list consisting of all users' identities, with the objective of auditing data integrity and protecting user privacy. Theoretical analysis shows that our framework achieves significant improvements in security and privacy, and it can decrease the probability that an adversary obtains users' identity privacy and the data in the cloud.
Collaborative Protection Architecture Design Orient to Fusion Ubiquitous Network
QI Yong, MO Xuan and LI Qian-mu
Computer Science. 2017, 44 (5): 100-104.  doi:10.11896/j.issn.1002-137X.2017.05.018
Abstract PDF(545KB) ( 439 )   
Based on an in-depth analysis of the functions and features of the fusion ubiquitous network, the hardware system for collaborative protection described in this paper was implemented by adding two kinds of functional entities, namely a fusion security access gateway and a virtual reconstruction security control server (security control server). Meanwhile, the software logical system was implemented by policy subscription. Additionally, an evidence projection decomposition method was used for evidence combination, which provides a security situation analysis method. Thus, in a fusion ubiquitous network, various peripheral networks can use existing heterogeneous access networks to reach the security service platform in the IP core network through the security access gateway. Meanwhile, security service commands and data can be sent to peripheral nodes in the other direction.
Co-residency Detection Scheme Based on Cache Load and Real Time Noise Ascertainment in Cloud
HE Pei-cong, HUANG Ru-wei, CHEN Ning-jiang, ZHAO Bo-wen and LIU Yang
Computer Science. 2017, 44 (5): 105-110.  doi:10.11896/j.issn.1002-137X.2017.05.019
Abstract PDF(571KB) ( 621 )   
Cloud computing has the advantages of convenient use, customized services on demand and optimized resource utilization, and it has become the main computing model for outsourced services. Side channel attacks between virtual machines are one of the main potential threats in the cloud environment, and co-residency is the premise of such attacks. Addressing co-residency detection in multi-tenant cloud environments, this paper presented the measurement of cache load by Prime-Probe with a linked structure (MCLPPLS) and a real-time noise ascertainment mechanism (RTNAM). Based on MCLPPLS and RTNAM, we proposed a new method for analyzing co-residency detection. The experimental results show that the method can reduce the interference of burst noise on co-residency detection, and achieves a higher true detection rate and lower detection time, showing good performance.
Multi-attribute Decision Making Model Based on Attribute Circle and Application of Reliability Analysis
CUI Tie-jun, LI Sha-sha and WANG Lai-gui
Computer Science. 2017, 44 (5): 111-115.  doi:10.11896/j.issn.1002-137X.2017.05.020
Abstract PDF(412KB) ( 461 )   
In order to enable the cloud model to express and compute multi-attribute decision problems easily and conveniently, and to adapt to the range data given by experts, this paper proposed an improved method to transform the attribute circle, and then calculated the characteristic parameters of the cloud model from the attribute circle. The definition, properties and drawing process of the attribute circle were given, along with how to calculate the characteristic parameters of the cloud model for a decision level using the attribute circle, including defining the decision system, attribute normalization, attribute circle features and properties, the calculation method for the attribute circle area, and cloud model calculation based on the attribute circle. The process and application of the cloud model were illustrated through an example. According to the data processing and expert analysis, the acceptable decision level of the cloud model was obtained for the reliability risk of the system, and a decision was then made, which can be used to determine the reliability level in multi-attribute decisions for actual systems.
Research on Application of Outlier Mining Based on Hybrid Clustering Algorithm in Anomaly Detection
YIN Na and ZHANG Lin
Computer Science. 2017, 44 (5): 116-119.  doi:10.11896/j.issn.1002-137X.2017.05.021
Abstract PDF(424KB) ( 575 )   
In order to improve the detection rate of anomaly detection systems, reduce the false alarm rate and solve the problems existing in current anomaly detection, outlier mining techniques were applied to anomaly detection, and this paper presented a network anomaly detection method based on a hybrid clustering algorithm (NADHC). In this method, a distance-based clustering algorithm is combined with a density-based clustering algorithm to form a new hybrid clustering algorithm. The method uses the k-medoids algorithm to find the cluster centers. Next, NADHC removes a small number of attack behavior samples with obvious characteristics of high concealment, then calculates the degree of abnormality over repeatedly augmented samples combined with the density-based clustering method to determine abnormal behavior. The NADHC algorithm was validated on the KDD CUP 99 dataset, and the experimental results show its feasibility and effectiveness.
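The distance-based half of such a hybrid can be sketched with plain k-medoids plus a distance-to-nearest-medoid outlier score: records far from every cluster center look anomalous. This is a generic sketch of the distance/density idea with fixed initial medoids, not NADHC itself; the data and initialization are illustrative assumptions.

```python
import numpy as np

def k_medoids_outliers(X, init, iters=20):
    """Simplified k-medoids: medoids are the data points minimizing total
    within-cluster distance; the outlier score of each point is its
    distance to the nearest medoid."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise dists
    med = np.array(init)
    for _ in range(iters):
        labels = np.argmin(D[:, med], axis=1)
        for j in range(len(med)):
            members = np.where(labels == j)[0]
            if len(members):
                med[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
    return med, D[:, med].min(axis=1)

rng = np.random.default_rng(3)
normal = np.vstack([rng.normal(0, 0.3, (10, 2)),   # two clusters of normal traffic
                    rng.normal(5, 0.3, (10, 2))])
X = np.vstack([normal, [[20.0, 20.0]]])            # last row mimics an attack record
med, score = k_medoids_outliers(X, init=[0, 10])   # attack record scores highest
```

NADHC augments this distance criterion with a density-based pass so that concealed attacks near cluster boundaries are also caught.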
Privacy Preserving Based on Taxonomy Tree for Dynamic Set-valued Data Publishing
SHI Xiu-jin and HU Yan-ling
Computer Science. 2017, 44 (5): 120-124.  doi:10.11896/j.issn.1002-137X.2017.05.022
Abstract PDF(508KB) ( 541 )   
The differential privacy protection method based on a taxonomy tree is effective for static set-valued data, but it has no counterpart for dynamic set-valued data. Therefore, this paper presented a taxonomy tree based differential privacy protection method for dynamic set-valued data publishing. Firstly, according to the relation matrix of the complete data set, the algorithm chooses the most closely related item sets and constructs the taxonomy tree. A boundary value is then set to limit incremental data updates; each new record is added at the root node of the taxonomy tree and assigned iteratively in accordance with the initial taxonomy tree's distribution method. Finally, according to the Laplace mechanism, noise is added to the leaf nodes to ensure that the algorithm satisfies the differential privacy requirements. Compared with existing algorithms, this algorithm optimizes the taxonomy tree so that the released data's taxonomy tree model has few leaf nodes, reducing the added noise. Experiments on two real datasets show that the algorithm is effective and performs better than existing algorithms.
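The final step is the standard Laplace mechanism: each leaf count is perturbed with noise of scale sensitivity/epsilon, which is why fewer leaves mean less total noise. The sketch below is that textbook mechanism with made-up leaf counts, not the paper's full tree construction.

```python
import numpy as np

def laplace_mechanism(counts, sensitivity, epsilon, rng):
    """Laplace mechanism: adding Laplace(0, sensitivity/epsilon) noise to
    each released count yields epsilon-differential privacy per release."""
    return counts + rng.laplace(0.0, sensitivity / epsilon, size=len(counts))

rng = np.random.default_rng(7)
leaf_counts = np.array([120.0, 45.0, 30.0, 5.0])   # counts at taxonomy-tree leaves
noisy = laplace_mechanism(leaf_counts, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Because each leaf contributes an independent noise draw, pruning the tree to fewer leaves directly reduces the aggregate error of the published data.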
Framework for Big Data Network Security Situational Awareness and Threat Warning Based on Open Source Toolset
JU An-kang, GUO Yuan-bo and ZHU Tai-ming
Computer Science. 2017, 44 (5): 125-131.  doi:10.11896/j.issn.1002-137X.2017.05.023
Abstract PDF(682KB) ( 1188 )   
Big data is a double-edged sword for information system security protection. On the one hand, data value density decreases as the amount of information increases dramatically, which provides better shelter for attacks such as APT. On the other hand, processing technologies for aggregating, mining and analyzing huge amounts of data make it possible to identify security threats accurately. In order to strengthen the threat perception ability of information systems, it is imperative to build a big data threat analysis platform. Based on open-source big data components, we proposed a situational awareness and threat warning platform covering data collection and reduction, data storage, off-line analysis, real-time correlation, threat warning and situation awareness. Compared with existing platforms, this architecture has the advantages of high availability and scalability, and it is easy to deploy and well suited to incorporating threat intelligence.
Protocol State Based Fuzzing Method for Industrial Control Protocols
ZHANG Ya-feng, HONG Zheng, WU Li-fa, ZHOU Zhen-ji and SUN He
Computer Science. 2017, 44 (5): 132-140.  doi:10.11896/j.issn.1002-137X.2017.05.024
Abstract PDF(1416KB) ( 1920 )   
Traditional fuzzing methods for industrial control systems (ICS) suffer from small coverage, low effectiveness and limited fault monitoring. This paper proposed a protocol state machine based fuzzing method for ICS protocols. Firstly, it describes the protocol state machine model with XML scripts and designs a protocol state based test sequence generation method (PSTSGM) to achieve a higher coverage rate during fuzzing. Then, it puts forward a heartbeat-based fault detection and location method (HDLMF), which detects behavioral faults in embedded equipment and locates the abnormal test cases via heartbeat detection and loop location. On the basis of the proposed method, we designed and implemented a fuzzing tool, SCADA-Fuzz, and performed tests on a power control SCADA system. Experimental results show that SCADA-Fuzz can effectively and efficiently trigger behavioral faults and locate security vulnerabilities.
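Generating test sequences from a protocol state machine can be sketched as a breadth-first walk over the state graph: each path is the message sequence that drives the device into one reachable state, where fuzzed inputs can then be injected. The toy state machine below is hypothetical, and this sketch illustrates the general idea rather than PSTSGM's exact strategy.

```python
from collections import deque

def generate_test_sequences(transitions, start):
    """BFS over the protocol state machine, emitting one message sequence
    per reachable state so every state gets exercised during fuzzing."""
    sequences, seen, queue = [], {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        sequences.append(path)                 # path drives the device to `state`
        for msg, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [msg]))
    return sequences

# Hypothetical toy state machine for a request/response ICS protocol
fsm = {"IDLE": [("CONNECT", "LINKED")],
       "LINKED": [("READ_REQ", "READING"), ("WRITE_REQ", "WRITING")]}
seqs = generate_test_sequences(fsm, "IDLE")    # one prefix per reachable state
```

A fuzzer replays each prefix verbatim and then mutates the next message, which is what raises state coverage compared with mutating from the initial state only.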
Hybrid Protocol Deformation Based Web Security Fuzzy Testing and Utility Evaluation Approach
TU Ling, MA Yue, CHENG Cheng and ZHOU Yan-hui
Computer Science. 2017, 44 (5): 141-145.  doi:10.11896/j.issn.1002-137X.2017.05.025
Abstract PDF(421KB) ( 538 )   
In Web application security fuzz testing, there are problems such as low test case coverage, ineffective validation of test case utility and the lack of quantitative evaluation of vulnerability detection results. In this paper, we proposed a method that generates test cases for typical Web security vulnerabilities through dynamic feature combination and protocol deformation. Rules for input feature combination and protocol deformation were devised, and an algorithm based on a taint propagation strategy together with an effectiveness validation method was established. Experiments show that the proposed method enhances the diversity and coverage of test cases, and reduces the false negative and false positive rates of vulnerability detection in complex Web filtering environments.
Fuzzy Multi-keyword Retrieval Scheme over Encrypted Data in Cloud Computing
HE Heng, XIA Wei, ZHANG Ji, JIN Yu and LI Peng
Computer Science. 2017, 44 (5): 146-152.  doi:10.11896/j.issn.1002-137X.2017.05.026
Abstract PDF(609KB) ( 599 )   
Nowadays more and more enterprise and individual users store large amounts of data in the cloud. To protect data privacy, important data has to be encrypted before being stored in the cloud, which poses a severe challenge for data retrieval. Traditional plaintext-based retrieval schemes are no longer applicable, and existing ciphertext-based retrieval schemes have many shortcomings: some do not support fuzzy search or multi-keyword search, some have poor efficiency or large space overhead, and some do not return ranked results. Therefore, researching secure and efficient retrieval schemes over ciphertext is of great significance. A new fuzzy multi-keyword retrieval scheme over encrypted data in cloud computing was proposed, which can retrieve ciphertext containing multiple keywords from the cloud server, supports fuzzy keyword search, and does not leak any plaintext information about the data or the queries to the cloud server or other attackers. In the scheme, a counting Bloom filter and the MinHash algorithm are used to construct index vectors and query vectors, which makes index building and querying more efficient and the ranked results more accurate. The security analysis and performance evaluation show that our scheme has high security, reliability, retrieval efficiency and accuracy.
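The two building blocks named in the abstract can be sketched generically: a counting Bloom filter (counters instead of bits, so deletions work) and MinHash signatures whose agreement rate approximates Jaccard similarity for ranking. The sizes, hash construction and keyword sets below are illustrative assumptions, not the paper's parameters.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: each of k hash positions holds a counter,
    so membership tests stay cheap and items can also be removed."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.counters[p] += 1

    def contains(self, item):
        return all(self.counters[p] > 0 for p in self._positions(item))

def minhash_signature(words, num_perm=16):
    """MinHash: for each of num_perm hash functions, keep the minimum
    hash over the word set; matching positions estimate Jaccard similarity."""
    return [min(int.from_bytes(hashlib.sha256(f"{i}:{w}".encode()).digest()[:8],
                               "big") for w in words)
            for i in range(num_perm)]

cbf = CountingBloomFilter()
cbf.add("cloud")
sig_a = minhash_signature({"secure", "cloud", "storage"})
sig_b = minhash_signature({"secure", "cloud", "search"})
overlap = sum(a == b for a, b in zip(sig_a, sig_b)) / 16  # ~ Jaccard similarity
```

In the scheme, such filters and signatures are built over (possibly misspelled) keyword fragments, so fuzzy multi-keyword queries reduce to comparing compact vectors on the server without revealing plaintext.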
Maritime Security Research Based on Stackelberg Game
GONG Xu-fu, WEI Cheng-jian, QIAN Zhen, CHE Bao-zhen and SHEN Hang
Computer Science. 2017, 44 (5): 153-159.  doi:10.11896/j.issn.1002-137X.2017.05.027
Abstract PDF(575KB) ( 771 )   
In recent years, security problems have attracted increasing attention worldwide. How to deploy limited security resources to best protect critical facilities and targets is a key challenge that many security departments face. For maritime safety patrol, a security model based on the Stackelberg game was proposed to schedule security resources. The model comprehensively considers limited security resources, the real-world constraints of time and space, and the fact that human behavior is not completely rational, relaxing the standard security-game assumption that the attacker is perfectly rational. Based on the theory of quantal response equilibrium, the attacker's behavioral preference was modeled, and the optimal strategies under imperfect and perfect rationality were compared and analyzed. Experimental results show that the new model under imperfect rationality obtains higher returns on real-world problems and can be applied more effectively to maritime security patrol.
Publicly Accountable Ciphertext-policy Attribute-based Encryption Scheme
MA Xiao-xiao and YU Gang
Computer Science. 2017, 44 (5): 160-165.  doi:10.11896/j.issn.1002-137X.2017.05.028
Abstract PDF(506KB) ( 734 )   
Ciphertext-policy attribute-based encryption (ABE) enables fine-grained access control of the decryption privilege via the matching relation between an attribute set and an access structure, and is a promising one-to-many encryption primitive with bright application prospects in cloud computing, big data, etc. However, in ABE an attribute set may be owned by many users, i.e., one decryption key may belong to many users; thus, malicious users may leak their decryption privileges to others for profit. Furthermore, a semi-trusted authority may illegally generate decryption keys for unauthorized users. To solve these two kinds of key abuse in ABE, we proposed a publicly accountable ciphertext-policy attribute-based encryption scheme that embeds the signatures of both the user and the authority into the secret key. The proposed scheme achieves traceability and accountability: anybody can trace the identity behind a leaked decryption key, and an auditor can verify whether the leaked key was shared by a malicious user or illegally generated by the semi-trusted authority. Finally, the security of the proposed scheme is proved based on the security of its underlying encryption and signature schemes.
Segmented Fusion Fuzzy Clustering Algorithm for Cloud Data Security Storage
SHAN Dong-hong, SHI Yong-chang, ZHAO Wei-ting and ZHANG Jing-pu
Computer Science. 2017, 44 (5): 166-169.  doi:10.11896/j.issn.1002-137X.2017.05.029
Abstract PDF(1097KB) ( 473 )   
In order to improve the safety of cloud data storage, the collection of attribute-clustered data needs to be optimized. The traditional fuzzy C-means clustering used for classifying cloud storage data is sensitive to the initial cluster centers and easily falls into local convergence, so a method for constructing a cloud data secure storage model based on segmented fusion and fuzzy clustering was proposed. A distribution grid structure model is built for cloud data secure storage, the vector quantization characteristics of cloud data attributes are decomposed, and a piecewise feature-matching detection method is applied to massive cloud storage data streams to realize adaptive compression, redundant data collection and mining, and high-order spectrum mining of the cloud data stream. On this basis, fuzzy C-means clustering is used to improve the security of data storage and reduce the storage load. Simulation results show that the proposed method reduces the data clustering error rate, improves storage throughput, and ensures the security of data storage.
Class of Permutation Polynomials over Finite Fields
WEI Qing and SUN Guang-hong
Computer Science. 2017, 44 (5): 170-171.  doi:10.11896/j.issn.1002-137X.2017.05.030
Abstract PDF(202KB) ( 582 )   
Permutation polynomials over finite fields have been applied in wide areas of science and engineering, especially in modern communication technology and cryptography. Paper [23] proved that when t is any even integer, polynomials of the form (x^(p^k) − x + δ)^t + γx + βTr(x) are permutation polynomials over F_(p^n). Our work proves that for any integer t, whether even or odd, polynomials of the form (x^(p^k+1) − x^p + δ)^t + γx + βTr(x) are permutation polynomials over F_(p^n).
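The permutation property itself can be verified by brute force on small prime fields; the sketch below is such a generic check (the monomials tested are textbook examples over F_7, not the paper's polynomial class):

```python
def is_permutation_poly(f, p):
    """Brute-force check: f permutes F_p iff its image over {0, ..., p-1}
    covers all p residues, i.e. the map has no collisions."""
    return len({f(x) % p for x in range(p)}) == p

# x -> x^5 permutes F_7 because gcd(5, 7-1) = 1,
# while x -> x^3 does not, since gcd(3, 6) = 3.
```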
Optimization on Distributed Stream Data Loading and Querying
YI Jia, XUE Chen and WANG Shu-peng
Computer Science. 2017, 44 (5): 172-177.  doi:10.11896/j.issn.1002-137X.2017.05.031
Abstract PDF(545KB) ( 607 )   
Distributed stream querying is a real-time query computation method over data streams that has attracted wide attention and developed rapidly in recent years. This paper summarized research results on distributed stream processing frameworks for real-time relational querying, with an in-depth comparison of several products, including distributed data loading frameworks, distributed stream computing frameworks and distributed stream query systems. The paper proposed a distributed stream query model based on Spark Streaming and Apache Kafka, and designed a fast data loading technique based on a virtual-memory file system that loads data twice as fast as Apache Flume. On top of Spark Streaming, a distributed stream query interface based on Spark SQL was realized, and a method for parsing SQL queries was proposed to implement distributed queries over data streams. The experiment results demonstrate that, for complex SQL queries, the hand-coded SQL parsing method has obvious advantages.
Dynamic Load Balance Algorithm for Big-data Distributed Storage
ZHANG Li-zong, CUI Yuan, LUO Guang-chun, CHEN Ai-guo, LU Guo-ming and WANG Xiao-xue
Computer Science. 2017, 44 (5): 178-183.  doi:10.11896/j.issn.1002-137X.2017.05.032
Abstract PDF(522KB) ( 1014 )   
Distributed storage is the major approach to handling big data. Currently the dominant technology is the Hadoop Distributed File System (HDFS), which has been beset by issues of scalability and write latency. In the official 2.0 release, a new feature, HDFS Federation, addresses this limitation by adding support for multiple NameNodes/namespaces to HDFS. However, it does not take the heterogeneity of NameNodes into account and still lacks dynamic load balancing. Consequently, a dynamic load balancing algorithm for HDFS NameNodes was proposed, which dynamically allocates metadata across a NameNode cluster with multiple copies in order to improve metadata utilization. In addition, the proposed algorithm improves read performance through metadata caches and improves stability through a built-in failover mechanism. Finally, an experiment was carried out to illustrate and evaluate the performance of the proposed algorithm.
Visual Custom Report Model Based on Single Data Source
TANG Jia, FU Yun-qing and WAN Xuan-min
Computer Science. 2017, 44 (5): 184-188.  doi:10.11896/j.issn.1002-137X.2017.05.033
Abstract PDF(412KB) ( 522 )   
A new visualization method for custom reports was proposed: users can customize the report style, XML technology is used to create database tables dynamically, and users can flexibly configure the mapping between report cells and data fields. By automatically assembling SQL, the system can quickly generate various types of reports from a single data source to meet users' needs, and the reports can be previewed and exported to Excel files. Finally, practical application in a project of Chongqing Transportation (Holding) Group Co., Ltd. shows that the new visual custom report method can quickly adapt to changing report requirements, greatly improving work efficiency and protecting users' investments.
Time-aware Cross-type Entity Recommendation in Heterogeneous Information Spaces
YANG Dan, CHEN Mo, WANG Gang and SUN Liang-xu
Computer Science. 2017, 44 (5): 189-192.  doi:10.11896/j.issn.1002-137X.2017.05.034
Abstract PDF(400KB) ( 499 )   
As entity search has become a new trend in information retrieval, entity recommendation has become one of the hottest research problems in industry and academia. Because heterogeneous information spaces contain heterogeneous entities with rich associations, cross-type entity recommendation is vital. Moreover, entities in heterogeneous information spaces carry temporal information and evolve over time, and users want the most time-relevant recommendations. In this paper, a time-aware cross-type entity recommendation framework, T-ERe, was proposed, which leverages the rich associations among different entity types and the query log to realize cross-type entity recommendation. T-ERe considers the temporal information of entities and recommends the most time-relevant entities of various types to users. Experimental results on two real data sets demonstrate the feasibility and effectiveness of T-ERe.
Optimal Control of Probabilistic Boolean Networks Using Model Checking
GUO Zong-hao and WEI Ou
Computer Science. 2017, 44 (5): 193-198.  doi:10.11896/j.issn.1002-137X.2017.05.035
Abstract PDF(632KB) ( 736 )   
Systems biology is expected to construct realistic, computational models of complex biological systems, aiming at a system-level understanding of them. One significant topic in the field is developing a control theory for gene regulatory networks (GRNs) by applying external intervention, with future applications in gene therapy technologies. At present, Boolean networks and their extension, probabilistic Boolean networks, are widely used as models of GRNs. In the study of the control problem, the state transitions of a probabilistic Boolean control network essentially form a finite-state, discrete-time Markov decision process (MDP). According to MDP theory, both the finite-horizon and the infinite-horizon optimal control problem can be solved using probabilistic model checking. For context-sensitive probabilistic Boolean control networks with random perturbation, the probabilistic model checker PRISM is used to build a formal model, the two kinds of optimal control problems are expressed in temporal logic, and the optimal solution is found via model checking. The results indicate that the proposed approach can be used effectively for the analysis and optimal control of biological networks.
Granular Structure Reduction Approach to Multigranulation Decision-theoretic Rough Sets
SANG Yan-li and QIAN Yu-hua
Computer Science. 2017, 44 (5): 199-205.  doi:10.11896/j.issn.1002-137X.2017.05.036
Abstract PDF(515KB) ( 538 )   
The multigranulation decision-theoretic rough set (MG-DTRS) model generalizes multigranulation rough sets by combining decision-theoretic rough set theory with the multigranulation idea; it is a data modeling method for decision-theoretic rough sets in the context of multiple granular spaces. Based on Bayesian decision theory, we analyzed the probability fusion relations under optimistic and pessimistic fusion strategies on multiple granular spaces, and proposed approximate representations of the maximum-conditional-probability and minimum-conditional-probability rough sets, from which the optimistic MG-DTRS model and the pessimistic MG-DTRS model were constructed. Furthermore, the concept of approximate distribution reduction was introduced into the MG-DTRS model, and the granular structure selection problem under multiple granular spaces was investigated. Based on the multigranulation approximate distribution quality proposed in this model, an importance measure of a granular structure was defined, and α-lower approximate distribution reduction algorithms for obtaining a granular structure reduct were designed under the optimistic and pessimistic fusion strategies respectively. Finally, an example was employed to verify the validity of the proposed algorithms.
Rough Set Based on “Conjunctive Logic” Operation of Variable Precision and Grade in Intuitionistic Fuzzy Ordered Information System
HU Meng, LI Meng-meng and XU Wei-hua
Computer Science. 2017, 44 (5): 206-210.  doi:10.11896/j.issn.1002-137X.2017.05.037
Abstract PDF(425KB) ( 444 )   
In this paper, the weighted score function of an intuitionistic fuzzy information system is defined by comprehensively considering the membership degree, non-membership degree and hesitation degree of an element with respect to the set. Based on the score function, the dominance relation of intuitionistic fuzzy information is defined. The variable precision rough set and the graded rough set are combined in a "conjunctive logic" mode, the resulting "conjunctive logic" rough set model is constructed, and its properties are explored in depth. Finally, the analysis of a practical example further reflects the significance of this study, which provides a new theoretical foundation for knowledge representation in ordered information systems.
Replica Exchange Based Local Enhanced Differential Evolution Searching Method in Ab-initio Protein Structure Prediction
LI Zhang-wei, HAO Xiao-hu and ZHANG Gui-jun
Computer Science. 2017, 44 (5): 211-217.  doi:10.11896/j.issn.1002-137X.2017.05.038
Abstract PDF(2008KB) ( 527 )   
To address the search problem in high-dimensional protein conformational space, a replica-exchange-based, locally enhanced differential evolution search method for ab-initio protein structure prediction (RLDE) was proposed. The knowledge-based coarse-grained energy model Rosetta is employed to considerably reduce the number of optimization variables of the conformational space, and knowledge-based fragment assembly is introduced to reduce the dimensionality further, so that the entropy effect caused by searching a high-dimensional conformational space can be avoided. Additionally, a conformation population is placed in every replica layer, the differential evolution algorithm updates the population in each layer, and the updated populations are then enhanced by the Monte Carlo method; as a consequence, the global optimal conformation and a series of metastable conformations are generated. RLDE can effectively search the global conformational space through the strong global search ability of differential evolution, while the good local search performance of the Monte Carlo method is employed to sample local minimum areas adequately. The replica exchange strategy ensures population diversity across replica layers and enhances the ability of the algorithm to jump out of local minima, further strengthening the search. Test results on 15 target proteins show that the proposed method can generate high-resolution near-native conformations by searching the conformational space effectively.
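The differential-evolution update used inside each replica layer can be sketched as follows. DE/rand/1/bin is assumed here, since the abstract does not state which DE variant the paper uses, and the population and parameter values are illustrative:

```python
import random

def de_generate_trial(pop, i, f=0.5, cr=0.9, rng=None):
    """DE/rand/1/bin trial generation: mutate with three distinct random
    members (a, b, c != i), then apply binomial crossover with the
    target vector pop[i]."""
    rng = rng or random.Random(0)
    dim = len(pop[i])
    a, b, c = rng.sample([x for x in range(len(pop)) if x != i], 3)
    mutant = [pop[a][d] + f * (pop[b][d] - pop[c][d]) for d in range(dim)]
    j_rand = rng.randrange(dim)  # guarantees at least one mutant gene
    return [mutant[d] if (rng.random() < cr or d == j_rand) else pop[i][d]
            for d in range(dim)]
```

In RLDE the trial vector would then be accepted or rejected by an energy comparison (and further refined by Monte Carlo moves) rather than by plain greedy selection.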
Improved Water Wave Optimization Algorithm Based on Chaos Optimization and Simplex Method
WU Xiu-li and ZHOU Yong-quan
Computer Science. 2017, 44 (5): 218-225.  doi:10.11896/j.issn.1002-137X.2017.05.039
Abstract PDF(1270KB) ( 671 )   
Water wave optimization (WWO), based on shallow water wave theory, is a new nature-inspired metaheuristic that mimics shallow water wave motions, including propagation, breaking and refraction, to search the solution space globally. In order to improve the convergence speed and accuracy of the algorithm, an improved water wave optimization based on chaos optimization and the simplex method (SM), named CSMWWO, was presented in this paper. To reduce the influence of random population initialization on convergence speed and precision, a chaotic initialization strategy was introduced; meanwhile, the simplex method, which has strong local search ability, was incorporated on top of the chaos optimization strategy to further improve the convergence speed of WWO. CSMWWO was compared with four heuristic algorithms, including WWO, on 12 benchmark functions. Experimental results show that the improved algorithm achieves better calculation accuracy and convergence speed, and that the proposed hybrid algorithm improves the overall performance of water wave optimization.
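A common way to realize a chaotic initialization strategy is the logistic map; the sketch below is one such realization. The specific map, its parameters and the scaling into the search bounds are assumptions, since the abstract does not give the exact chaotic rule used in CSMWWO:

```python
def chaotic_population(pop_size, dim, lower, upper, x0=0.7, mu=4.0):
    """Initialise a population with the logistic map x <- mu*x*(1-x).
    With mu = 4 the map is fully chaotic on (0, 1), spreading the
    initial points more evenly than independent uniform draws; each
    value is then scaled into [lower, upper]."""
    pop, x = [], x0
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = mu * x * (1 - x)
            ind.append(lower + x * (upper - lower))
        pop.append(ind)
    return pop
```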
Building Hierarchical Topic Based on Heterogeneous Chinese Online Encyclopedia
WANG Xu-zhong, LIU Yan, HU Lin-mei and CHEN Jing
Computer Science. 2017, 44 (5): 226-231.  doi:10.11896/j.issn.1002-137X.2017.05.040
Abstract PDF(1251KB) ( 540 )   
Chinese online encyclopedias carry a huge amount of high-quality information, and previous studies have utilized them for different knowledge acquisition tasks; for instance, articles with similar subjects are grouped together into categories. Constructing the topical hierarchy of a category from an online encyclopedia is significantly beneficial for many applications such as search and browsing, information organization and information retrieval. However, no attempts have been made to explore the topic hierarchy of a given category in online encyclopedias. Considering that most online encyclopedia content is heterogeneous and rough, this paper proposed a novel scheme for constructing a topic hierarchy based on a Bayesian network. The scheme incorporates both the structured tables of contents and the unstructured text descriptions of the articles in the same category into automatic topic hierarchy learning, using a maximum spanning tree algorithm on the Bayesian topic network. Experimental results show that, compared with the existing encyclopedia topical hierarchy, our approach expands the content fourfold while maintaining an accuracy of 75%.
Improved Algorithm for Multi-shortest Path Problem Based on Floyd Algorithm
ZUO Xiu-feng and SHEN Wan-jie
Computer Science. 2017, 44 (5): 232-234.  doi:10.11896/j.issn.1002-137X.2017.05.041
Abstract PDF(315KB) ( 1299 )   
Finding the shortest path is the core of path analysis, which is a fundamental problem in network analysis. The Floyd algorithm is one of the most classical algorithms for solving the shortest path problem. In practical problems, however, there may exist multiple shortest paths with the same weight, a case the Floyd algorithm does not address. This paper designed a multi-shortest-path algorithm for undirected graphs based on the Floyd algorithm and offered an example to examine the correctness of the algorithm. The experimental results show that the algorithm can effectively solve the multi-shortest-path problem.
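One standard way to extend the Floyd algorithm to report every equal-weight shortest path is to keep, for each pair of vertices, all next hops that attain the optimal distance; the sketch below illustrates that idea and is not necessarily the paper's exact construction:

```python
def floyd_multi_shortest(w):
    """Floyd-Warshall for all-pairs distances, plus a post-pass that
    keeps, for every pair (i, j), each neighbour h lying on some
    shortest path (w[i][h] + dist[h][j] == dist[i][j]), so that all
    equal-weight shortest paths can be enumerated.
    Assumes positive edge weights; w[i][i] must be 0."""
    n = len(w)
    INF = float("inf")
    dist = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    nxt = [[[h for h in range(n)
             if h != i and w[i][h] < INF
             and w[i][h] + dist[h][j] == dist[i][j]]
            for j in range(n)] for i in range(n)]

    def all_paths(i, j):
        if i == j:
            return [[i]]
        if dist[i][j] == INF:
            return []
        return [[i] + p for h in nxt[i][j] for p in all_paths(h, j)]

    return dist, all_paths
```

On the square graph 0-1, 0-2, 1-3, 2-3 with unit weights, both shortest paths from 0 to 3 ([0,1,3] and [0,2,3]) are recovered.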
Adaptive Step-size Cuckoo Search Algorithm
LI Rong-yu and DAI Rui-wen
Computer Science. 2017, 44 (5): 235-240.  doi:10.11896/j.issn.1002-137X.2017.05.042
Abstract PDF(462KB) ( 1399 )   
Cuckoo search algorithm (CSA) is a novel, simple and efficient nature-inspired algorithm. To overcome the defects of the standard algorithm, namely its slow convergence rate and its tendency to fall into local optima in the later period, a new adaptive step-size cuckoo search algorithm (ASCSA) was proposed. By adjusting the step size of the Lévy flight adaptively, the algorithm enhances global search ability in the early period and local search ability in the later period. Moreover, for the biased random walk, a dynamic inertia weight and a memory strategy are introduced so that the algorithm can make full use of historical experience, strengthening its stability. Simulation results show that the performance of ASCSA is obviously improved compared with the standard CS algorithm and its modified variants.
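The Lévy flight underlying cuckoo search is commonly generated with Mantegna's algorithm; the sketch below pairs it with a hypothetical linearly decreasing step-size schedule, since the paper's actual adaptive rule is not given in the abstract:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step via Mantegna's algorithm:
    step = u / |v|**(1/beta), with u ~ N(0, sigma_u**2), v ~ N(0, 1)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    return rng.gauss(0, sigma_u) / abs(rng.gauss(0, 1)) ** (1 / beta)

def adaptive_step(a_max, a_min, t, t_max):
    """Hypothetical linearly decreasing step-size schedule: large
    global steps early, fine local steps late."""
    return a_max - (a_max - a_min) * t / t_max
```

A new solution would then be formed as `x + adaptive_step(...) * levy_step()` per dimension.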
Approach to Identify Chinese Event Factuality
HE Tian-xiong, LI Pei-feng and ZHU Qiao-ming
Computer Science. 2017, 44 (5): 241-244.  doi:10.11896/j.issn.1002-137X.2017.05.043
Abstract PDF(447KB) ( 753 )   
Event factuality refers to the level of factual information of an event as expressed by its narrator, and it is a foundation of natural language understanding. In this research, we focused on identifying Chinese event factuality and proposed an effective feature-based identification approach. It extracts and transforms features from factuality-related information; meanwhile, considering the relationship between some of the features and event factuality, it fuses these features according to rules. Experimental results show that our approach achieves higher performance than the rule-based approach on the event factuality identification task.
Web Service Selection Method Based on Dynamic QoS
FANG Chen, WANG Jin-dong and YU Zhi-yong
Computer Science. 2017, 44 (5): 245-250.  doi:10.11896/j.issn.1002-137X.2017.05.044
Abstract PDF(509KB) ( 690 )   
In order to deal with the fluctuation of Quality of Service (QoS) values of Web services in dynamic environments, a service selection method based on dynamic QoS was proposed. Firstly, the method establishes an interval QoS model to represent the dynamic changes of QoS values. Then it uses interval similarities to measure the proximity of the QoS values provided by candidate services to the QoS requirements given by consumers. Based on this similarity, the objective weight of each QoS criterion for each basic service is calculated using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method for MADM (Multiple Attribute Decision Making) problems. After combining these with the consumers' subjective preference to obtain comprehensive weights, the candidate services are ranked for recommendation. The experimental results show that this service selection method not only considers the consumers' subjective preference but also overcomes the impact of fluctuations in the QoS values of Web services, enhancing the correctness of service selection.
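A minimal sketch of the TOPSIS ranking idea referenced above follows; benefit-type criteria (larger is better) and the example weights are assumptions, and the paper's interval-valued variant is not reproduced here:

```python
import math

def topsis_rank(matrix, weights):
    """TOPSIS: vector-normalise each criterion column, apply weights,
    then score each alternative by relative closeness to the ideal
    (column-wise max) versus the anti-ideal (column-wise min) solution.
    Assumes all criteria are benefit-type and alternatives differ."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[a][c] ** 2 for a in range(n_alt)))
             for c in range(n_crit)]
    v = [[weights[c] * matrix[a][c] / norms[c] for c in range(n_crit)]
         for a in range(n_alt)]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]
```

An alternative that dominates on every criterion coincides with the ideal solution and therefore scores exactly 1.0.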
Simulated Annealing for Weighted Localization Algorithm Based on LTE Directional Propagation Model
WANG Wei-hong, YAN Lu-qin and YANG Jie
Computer Science. 2017, 44 (5): 251-256.  doi:10.11896/j.issn.1002-137X.2017.05.045
Abstract PDF(1167KB) ( 717 )   
Based on collected MRO (Measurement Report Original) data in LTE (Long Term Evolution) networks, a simulated annealing weighted localization algorithm based on an LTE directional propagation model was proposed. Firstly, combining the characteristics of LTE directional antennas and cell sites, a directional RSS-based propagation model with a direction-type parameter was put forward, improving the traditional COST-231 Hata signal propagation model. Then, a distance-ratio weighting algorithm was proposed to eliminate RSS (Received Signal Strength) fluctuation error, which transforms the positioning problem into a single-variable optimization problem, and the simulated annealing algorithm was used to compute the optimal solution. Finally, an elliptical distance model was used to correct the azimuth for the final terminal positioning result. Comparative experiments show that the proposed algorithm achieves higher positioning accuracy, fully meeting the positioning accuracy required by the FCC rules.
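For reference, the classical (unmodified) COST-231 Hata urban model that the directional model extends can be sketched as follows; the medium-city mobile antenna correction is assumed, and the paper's direction-type parameter is not reproduced since the abstract does not specify it:

```python
import math

def cost231_hata(f_mhz, d_km, h_base, h_mobile, c=0.0):
    """COST-231 Hata urban path loss in dB.
    f_mhz: carrier frequency in MHz (model range roughly 1500-2000),
    d_km: link distance in km, h_base/h_mobile: antenna heights in m,
    c: 0 for medium cities and suburbs, 3 for metropolitan centres."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))  # mobile antenna correction
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)
```

Inverting this loss for a measured RSS gives the distance estimate that the weighting and annealing steps then refine.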
Fireworks Optimization Algorithm Based on Simulated Annealing and Gaussian Perturbations
HAN Shou-fei, LI Xi-guang and GONG Chang-qing
Computer Science. 2017, 44 (5): 257-262.  doi:10.11896/j.issn.1002-137X.2017.05.046
Abstract PDF(448KB) ( 800 )   
Fireworks algorithm (FWA) is a kind of swarm intelligence optimization algorithm for solving complex problems, with a global capacity for finding the optimal solution. This paper introduced both simulated annealing and Gaussian perturbations into the standard FWA so as to improve its ability to find the global optimum, proposing a simulated annealing Gaussian fireworks algorithm (SAFWA) for global optimization. In our simulation, we compared FWA, SPSO, EFWA and SAFWA on 10 typical benchmark functions. The results show that SAFWA is better than FWA, SPSO and EFWA in terms of convergence speed, accuracy and stability.
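The two ingredients, Gaussian perturbation and annealed acceptance, can be sketched as a single move; the coupling below is a generic simulated-annealing step with a Gaussian proposal, not the paper's exact SAFWA operator:

```python
import math
import random

def sa_gauss_move(x, fx, f, temp, sigma=0.1, rng=random):
    """One annealing move: perturb x with Gaussian noise, keep the new
    point if it is better, or accept a worse one with Metropolis
    probability exp(-delta / temp)."""
    y = [xi + rng.gauss(0.0, sigma) for xi in x]
    fy = f(y)
    delta = fy - fx
    if delta < 0 or rng.random() < math.exp(-delta / temp):
        return y, fy
    return x, fx
```

Cooling the temperature between moves makes the walk increasingly greedy, which is what lets the hybrid escape local optima early while converging late.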
Optimization Study on Weapon-Target Assignment Problem in Air-defense Operation Based on Intuitionistic Fuzzy Hybrid Particle Swarm Optimization
MEI Hai-tao, HUA Ji-xue, WANG Yi and WEN Tong
Computer Science. 2017, 44 (5): 263-267.  doi:10.11896/j.issn.1002-137X.2017.05.047
Abstract PDF(413KB) ( 497 )   
The weapon-target assignment (WTA) problem, a key procedure in air-defense operations, is NP-hard. To improve the speed and precision of WTA, an intuitionistic fuzzy hybrid particle swarm optimization (IF-HPSO) was proposed. Firstly, this paper established a WTA optimization model with resource constraints that intercepts more threat objects while consuming the least ammunition. Then, an intuitionistic fuzzy charisma function was introduced so that better individuals participate in the velocity and position updates, and an identical factor was defined to adjust the inertia weight and learning operators adaptively. Furthermore, a GA with an elitist-reserving strategy was developed and combined with PSO for the optimization search. Finally, simulation and comparison of IF-HPSO with the AIA, GA and HDPSO algorithms indicate that IF-HPSO performs better in both optimization speed and results.
Parameter Self-learning Method Based on Kalman Filter for Dam Deformation Prediction
ZHAN Peng-fei, LV Xin, MAO Ying-chi, XU Shu-fang, WANG Long-bao and MA Hong-xu
Computer Science. 2017, 44 (5): 268-271.  doi:10.11896/j.issn.1002-137X.2017.05.048
Abstract PDF(406KB) ( 662 )   
The Kalman filter is widely applied to dam deformation prediction. However, the identification of the model parameters, especially the state and observation noise covariance matrices, is derived mostly from engineering experience or expert knowledge. Therefore, a self-learning method for parameter identification was proposed, in which the parameters of the Kalman filter are determined from historical data by combining Monte Carlo and rejection sampling algorithms. More precisely, the state noise is estimated from those training samples whose observations closely approximate the actual values, and the observation noise is determined by subtracting the aforementioned noise from the overall error. The experiment results show that the proposed method is more accurate than comparable methods and is well suited to dam deformation prediction.
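For intuition, a scalar Kalman filter with a random-walk state model shows exactly where the two noise variances q and r that the paper learns would enter (the dam model itself is of course multivariate; this is a didactic reduction):

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter with random-walk dynamics:
    state    x_k = x_{k-1} + w,  w ~ N(0, q)   (state noise variance q)
    measure  z_k = x_k + v,      v ~ N(0, r)   (observation noise variance r)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q              # predict: uncertainty grows by q
        k = p / (p + r)        # Kalman gain: trades prediction vs measurement
        x = x + k * (z - x)    # update with the innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With small q relative to r the filter trusts its prediction and smooths heavily, which is why mis-specified covariances, the problem this paper attacks, directly degrade the prediction.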
Vectorized Representation of Heterogeneous Network Based on Neural Networks
WU Wei-zu, LIU Li-qun and XIE Dong-qing
Computer Science. 2017, 44 (5): 272-275.  doi:10.11896/j.issn.1002-137X.2017.05.049
Abstract PDF(317KB) ( 714 )   
When a network contains different types of objects, the relationships between objects become various and the structure of the network becomes even more complex. To deal with this heterogeneity, this paper proposed a neural-network-based vectorized representation method for heterogeneous networks. For a heterogeneous network with two object types, images and text, a multi-layer convolutional neural network is adopted to map each image into a latent feature space, and a fully connected neural network is adopted to map each text object into the same latent feature space. In this space, the similarities between images, between texts, and even between an image and a text can all be computed with the same distance measure. In the experiments, the proposed method is applied to a variety of tasks on heterogeneous networks, and the results show that it is effective.
Adaptive Clustering Algorithm Based on Rank Constraint Density Sensitive Distance
REN Yong-gong, LIU Yang and ZHAO Yue
Computer Science. 2017, 44 (5): 276-279.  doi:10.11896/j.issn.1002-137X.2017.05.050
Abstract PDF(380KB) ( 489 )   
Traditional clustering algorithms generally use the Euclidean distance to build the similarity matrix. For more complex data, the Euclidean distance cannot describe the characteristics of the data because it does not reflect global consistency. An adaptive clustering algorithm based on rank-constrained density-sensitive distance (RCDSD) was proposed in this paper. First, a density-sensitive distance measure is introduced to build the similarity matrix; it effectively enlarges the distance between different classes and reduces the distance within the same class, overcoming the biased clustering results of traditional Euclidean-distance-based algorithms. Second, a rank constraint is imposed on the Laplacian matrix of the similarity matrix so that the number of connected components of the similarity matrix equals the number of clusters; the data can then be divided directly into the right classes to obtain the final clustering result, without running k-means or any other discretization procedure. Experimental results show that the approach obtains accurate clustering results and improves clustering performance on both artificial simulation data sets and real data sets.
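A common formulation of density-sensitive distance stretches each edge by an exponential of its Euclidean length and then takes shortest paths over all points, which is what restores global consistency; the sketch below follows that formulation (the stretch factor rho is an assumption, and the paper's rank constraint is not reproduced):

```python
import math
from itertools import product

def density_sensitive_distance(points, rho=2.0):
    """Density-sensitive distance: each direct link gets length
    rho**d(x, y) - 1, which penalises long (low-density) hops, and the
    pairwise distance is the shortest path through all points
    (computed here with Floyd-Warshall)."""
    n = len(points)

    def euclid(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    d = [[rho ** euclid(points[i], points[j]) - 1 for j in range(n)]
         for i in range(n)]
    for k, i, j in product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

For three collinear points at 0, 1 and 2, the hop through the middle point (1 + 1 = 2) beats the direct link (2^2 − 1 = 3), illustrating how chains of near neighbours shorten within-cluster distances.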
Word Semantic Relevancy Computation and Categories Identification Based on Chinese Compound Sentences
YANG Jin-cai, CHEN Zhong-zhong, SHEN Xian-jun and HU Jin-zhu
Computer Science. 2017, 44 (5): 280-284.  doi:10.11896/j.issn.1002-137X.2017.05.051
Abstract PDF(399KB) ( 662 )   
As a critical technique in Chinese information processing, word semantic relevancy computation plays an important role in information retrieval, disambiguation and text processing. Using syntactic theory and the collocation theory of relation markers in Chinese compound sentences, and taking a corpus of Chinese compound sentences together with compound sentences from search engines as the data resource, a semantic relevancy computation method based on Chinese compound sentences (SRCCS) was proposed. This method not only computes word semantic relevancy but also indicates the property and category of the relevancy. Compared with short-text semantic relevancy methods, it evaluates a smaller scope of objects, so the results are more accurate at lower computational complexity. Compared with the results of Google Distance, the new measure is more reliable and effective.
Improved Bayesian Probabilistic Model Based Recommender System
LIU Fu-yong, GAO Xian-qiang and ZHANG Zhu
Computer Science. 2017, 44 (5): 285-289.  doi:10.11896/j.issn.1002-137X.2017.05.052
Abstract PDF(393KB) ( 994 )   
Aiming at the low prediction and recommendation accuracy of matrix-factorization-based collaborative filtering recommender systems, an improved matrix factorization method and collaborative filtering recommender system were proposed. Firstly, the rating matrix is factorized into two non-negative matrices, and the results are normalized to carry a probabilistic meaning. Then, variational inference is used to approximate the true posterior distribution of the Bayesian model. Lastly, user groups with the same preferences are found and the preferences of each user are predicted. Besides, a recommendation decision algorithm with low computational complexity and low storage overhead was designed based on the sparsity of the user vectors. Experimental results on three public datasets show that the proposed algorithm outperforms other algorithms in prediction accuracy and recommendation effect.
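The non-negative factorization step can be sketched with the classic Lee-Seung multiplicative updates; the paper's Bayesian/variational treatment is more involved, so this shows only the NMF core under the assumption of a fully observed rating matrix:

```python
import random

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(R, k=2, iters=200, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for non-negative matrix
    factorization R ~= W H; all entries stay non-negative by
    construction and the squared error is non-increasing."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WtR = matmul(transpose(W), R)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[a][b] * WtR[a][b] / (WtWH[a][b] + eps) for b in range(m)]
             for a in range(k)]
        RHt = matmul(R, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[a][b] * RHt[a][b] / (WHHt[a][b] + eps) for b in range(k)]
             for a in range(n)]
    return W, H
```

In the paper's setting the factor rows would additionally be normalized so each user vector reads as a distribution over latent preference groups.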
u-Block Optimization Algorithm for 2-D Strip Packing Problem
HUANG Hai and LI Song-bin
Computer Science. 2017, 44 (5): 290-293.  doi:10.11896/j.issn.1002-137X.2017.05.053
Abstract PDF(1160KB) ( 634 )   
References | Related Articles | Metrics
The 2-D strip packing problem is that of loading a subset of a given set of rectangular boxes with profits into a rectangular container so that the stowed profit is maximized. This paper presented a u-Block optimization algorithm to find a more efficient block parameter for any value of ε. A new data structure was introduced for different classes of rectangles. Finally, the u-points on the X axis of the different kinds of rectangular boxes were filled to maximize the profit of the packed items. Two algorithms were developed: one computes an effective block parameter u, and the other is a block selection algorithm based on a simplified data structure. In particular, a polynomial-time approximation was presented for any value of ε: with polynomial time complexity, the required box width can be reduced from 1+ε to 1 while the height remains the same.
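The u-Block algorithm itself is not spelled out in the abstract. As a baseline illustration of strip packing (not the authors' method), here is a simple first-fit shelf heuristic that packs rectangles into a strip of fixed width:

```python
def shelf_pack(strip_width, rects):
    """First-fit shelf heuristic: sort rectangles by decreasing height
    and open a new shelf (full-width level) whenever a rectangle fits
    on no existing shelf. Returns (placements, total_height), where
    each placement is (x, y, w, h)."""
    shelves = []      # each shelf: [y_base, shelf_height, x_cursor]
    placements = []
    y_top = 0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for shelf in shelves:
            if shelf[2] + w <= strip_width and h <= shelf[1]:
                placements.append((shelf[2], shelf[0], w, h))
                shelf[2] += w
                break
        else:  # no shelf could hold it: open a new one
            shelves.append([y_top, h, w])
            placements.append((0, y_top, w, h))
            y_top += h
    return placements, y_top
```

Sorting by decreasing height keeps each shelf densely filled; the paper's block-based approach instead groups rectangles into blocks parameterized by u to obtain approximation guarantees.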
Foreground Object Detection Based on Improved PBAS
WANG Rong-qi, ZHENG Lin and WANG Biao
Computer Science. 2017, 44 (5): 294-298.  doi:10.11896/j.issn.1002-137X.2017.05.054
Abstract PDF(1232KB) ( 607 )   
References | Related Articles | Metrics
To address the problems of PBAS (Pixel Based Adaptive Segmenter), such as its low detection accuracy, the absorption of immobile or slow-moving foreground objects into the background, and the interference of ghosting, this paper presented an improved PBAS that merges pixel-level and region-level information. A measure of background dynamics fusing structural and color information at the region level is computed first. This measure is then used to estimate and control the decision threshold and the learning rate, as well as to detect the foreground object. A spatial neighborhood contrast on the pixel-level result is computed to suppress the interference of ghosting, and a foreground counter is introduced to avoid missing static objects in the foreground. The experimental results indicate that the algorithm is insensitive to brightness and object velocity, so foreground objects can be detected effectively and ghosting can be removed quickly, with a detection accuracy of 92.7%.
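A minimal sketch of the pixel-sample background model that PBAS builds on is shown below; unlike real PBAS, the decision threshold R and the learning rate are fixed rather than adapted per pixel, and no region-level measure is fused in.

```python
import numpy as np

def pbas_segment(frames, n_samples=10, r_thresh=30, min_matches=2, seed=0):
    """Minimal pixel-sample background segmenter in the spirit of PBAS:
    keep N background samples per pixel; a pixel is foreground when
    fewer than `min_matches` samples lie within distance R of it."""
    rng = np.random.default_rng(seed)
    frames = np.asarray(frames, dtype=float)
    model = np.repeat(frames[0][None], n_samples, axis=0)  # (N, H, W)
    masks = []
    for f in frames[1:]:
        matches = (np.abs(model - f) < r_thresh).sum(axis=0)
        fg = matches < min_matches
        masks.append(fg)
        # conservatively refresh one random sample at background pixels
        idx = rng.integers(0, n_samples)
        model[idx][~fg] = f[~fg]
    return masks
```

Because samples are only refreshed at background pixels, a static foreground object persists in the mask, which is the behavior the paper's foreground counter is designed to manage.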
Image Denoising Method of Spectrum Clustering Based on Non-local Similarity
KE Zu-fu, YI Ben-shun and XIE Qiu-ying
Computer Science. 2017, 44 (5): 299-303.  doi:10.11896/j.issn.1002-137X.2017.05.055
Abstract PDF(1083KB) ( 776 )   
References | Related Articles | Metrics
Conventional image denoising algorithms use the prior information of either the natural image or the noisy image alone, without effectively combining the priors of the two. To address this problem, a novel image denoising method that joins the prior information of the natural image with the non-local similarity of the noisy image was proposed in this paper. Firstly, similar blocks in the natural image are grouped into the same class by spectral clustering, and this clustering result is used to cluster the blocks of the noisy image. The blocks of each class of the noisy image are then vectorized into a low-rank matrix. Secondly, a low-rank approximation is applied to this matrix to estimate the corresponding original image data. Finally, the original image is reconstructed from the estimated data. The experimental results show that, compared with RNL (adaptive regularization of NL-Means) and LPG-PCA (two-stage image denoising by principal component analysis with local pixel grouping), the proposed algorithm provides significant improvement in both PSNR and local information preservation, and thus produces a better denoising effect.
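The low-rank approximation step can be sketched as follows, assuming the patch grouping (which the paper obtains by spectral clustering) is already given; a truncated SVD stands in for the paper's low-rank estimator.

```python
import numpy as np

def lowrank_denoise(patch_stack, rank):
    """Denoise a group of similar patches by low-rank approximation:
    stack each patch as a column of a matrix, keep only the top `rank`
    singular values, and unstack the columns back into patches."""
    m = np.stack([p.ravel() for p in patch_stack], axis=1)
    U, s, Vt = np.linalg.svd(m, full_matrices=False)
    s[rank:] = 0                      # truncate to the given rank
    approx = U @ np.diag(s) @ Vt
    return [approx[:, j].reshape(patch_stack[0].shape)
            for j in range(approx.shape[1])]
```

Because similar patches form a nearly rank-deficient matrix while i.i.d. noise spreads across all singular directions, truncation suppresses the noise while preserving the shared structure.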
Speech Endpoint Detection Algorithm Based on Sub-band Energy-entropy-ratio
ZHANG Yi, WANG Ke-jia, XI Bing and YAN Bo
Computer Science. 2017, 44 (5): 304-307.  doi:10.11896/j.issn.1002-137X.2017.05.056
Abstract PDF(366KB) ( 865 )   
References | Related Articles | Metrics
Accurately identifying speech endpoints is an important step of the speech recognition process. To better discriminate speech from noise in low-SNR environments and improve the accuracy of speech endpoint detection, this paper proposed a new speech endpoint detection algorithm based on the sub-band energy-entropy ratio. The algorithm takes the ratio of the short-time sub-band energy to the sub-band spectral entropy as the key parameter of endpoint detection, and sets a threshold on it to detect speech endpoints. Experiments show that the algorithm is fast and efficient, has strong robustness, and can detect speech endpoints accurately even at low SNR.
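The sub-band energy-entropy ratio described above can be sketched as follows; the number of sub-bands and the thresholding scheme are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def subband_energy_entropy_ratio(frame, n_subbands=4):
    """Summed per-sub-band energy-to-spectral-entropy ratio of one
    frame: speech concentrates energy in few spectral bins (high
    energy, low entropy), so the ratio is large for speech frames."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    ratios = []
    for band in np.array_split(spec, n_subbands):
        energy = band.sum()
        p = band / (energy + 1e-12)           # normalized sub-band spectrum
        entropy = -(p * np.log(p + 1e-12)).sum()
        ratios.append(energy / (entropy + 1e-12))
    return float(np.sum(ratios))

def detect_endpoints(frames, threshold):
    """Mark each frame as speech (True) when its summed sub-band
    energy-entropy ratio exceeds a fixed threshold."""
    return [bool(subband_energy_entropy_ratio(f) > threshold) for f in frames]
```

In practice the threshold would be estimated from leading noise-only frames; here it is simply passed in.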
Multi-object Segmentation of Image Scene Based on Object Recognition and Saliency Detection
LI Qing, YUAN Jia-zheng and LIU Hong-zhe
Computer Science. 2017, 44 (5): 308-313.  doi:10.11896/j.issn.1002-137X.2017.05.057
Abstract PDF(3273KB) ( 507 )   
References | Related Articles | Metrics
This paper proposed a multi-object segmentation method for image scenes based on object recognition and saliency detection. An object detector is learned on the training set and then used to locate objects in the test image, visualized as bounding boxes. The test image is over-segmented into a set of superpixels. According to the location of the bounding box and the superpixel-level probabilities, the region of interest is fixed. Then, a saliency map is obtained through three-scale saliency detection. Within the region of interest, a CRF model is established over neighboring superpixels, whose nodes represent the superpixels and whose edges represent their neighborhood relations. The saliency of a superpixel is embedded into the weight of its node, and the feature difference between two neighboring superpixels is embedded into the weight of their edge. Thus, the multi-object segmentation task is transformed into a multi-labeling task. Finally, the CRF formulation is optimized with the graph cut algorithm to obtain the multi-object segmentation result. The experimental results show the good performance of the method.
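To illustrate the kind of energy the CRF encodes, the sketch below builds a Potts-style energy over a toy superpixel graph and minimizes it by exhaustive enumeration; the paper instead optimizes with graph cuts, and the unary costs here are hypothetical saliency-derived values.

```python
import itertools

def crf_energy(labels, unary, edges, pairwise_weight):
    """Energy of a labeling: per-node unary costs (derived from
    saliency in the paper) plus a Potts penalty on each pair of
    neighboring nodes that disagree."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise_weight[i, j]
             for i, j in edges if labels[i] != labels[j])
    return e

def minimize_exhaustive(n_nodes, n_labels, unary, edges, pairwise_weight):
    """Exact minimizer by enumeration -- viable only for tiny graphs;
    graph cuts (e.g. alpha-expansion) scale to real superpixel graphs."""
    best = min(itertools.product(range(n_labels), repeat=n_nodes),
               key=lambda lab: crf_energy(lab, unary, edges, pairwise_weight))
    return list(best)
```

On a 4-node chain with two labels and unary costs favoring a background/foreground split, the minimizer pays one pairwise penalty to cut the chain exactly once.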
Study on Watershed Algorithm Applied to Active Contour Model Energy Segmentation Algorithm
WANG Mei, LI Lin, WANG Bin and HE Gao-ming
Computer Science. 2017, 44 (5): 314-319.  doi:10.11896/j.issn.1002-137X.2017.05.058
Abstract PDF(2401KB) ( 810 )   
References | Related Articles | Metrics
The active contour model (snake model) is widely used in edge detection, image segmentation and other fields. The model initializes a contour around the target and converges autonomously, separating the target at the state of minimum energy. However, the model is sensitive to the initial position of the contour and must rely on other mechanisms for a reasonable initialization of the internal energy; moreover, due to the non-convexity of the model, it may converge to a local extreme point or even diverge. This article applied the watershed algorithm to the energy segmentation of the active contour model: the initial contour is determined by an improved watershed algorithm, and the contour points are then iterated over their local neighborhoods to select a contour of smaller energy. When the minimum is reached, the extraction of the target contour is complete.
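The local-neighborhood contour iteration can be sketched as a greedy snake step, assuming a precomputed external energy map (low values near edges); the internal smoothness terms of the full model, and the watershed initialization, are omitted here.

```python
import numpy as np

def greedy_snake_step(contour, energy_map):
    """One greedy iteration: each contour point moves to the pixel in
    its 3x3 neighborhood with the lowest external energy, so repeated
    steps slide the contour into nearby energy minima."""
    h, w = energy_map.shape
    new_contour = []
    for y, x in contour:
        best = (y, x)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        energy_map[ny, nx] < energy_map[best]:
                    best = (ny, nx)
        new_contour.append(best)
    return new_contour
```

Iterating until no point moves gives a local energy minimum, which is why a good watershed-based initialization matters: a greedy scheme cannot escape the basin it starts in.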