Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 5, 15 May 2018
  
Research on Definition and Technological System of Cyberspace Surveying and Mapping
ZHOU Yang, XU Qing, LUO Xiang-yang, LIU Fen-lin, ZHANG Long and HU Xiao-fei
Computer Science. 2018, 45 (5): 1-4.  doi:10.11896/j.issn.1002-137X.2018.05.001
Carrying out research on geo-cyberspace and applying the theories, methods and technologies of surveying and mapping to cyberspace situational awareness have become hot issues for researchers in both the surveying-and-mapping and networking fields. This paper first expounded the concepts of geographic space and cyberspace. After reviewing the development and research status of geo-cyberspace, the concept of cyberspace surveying and mapping was proposed for the first time. The research scope, functions and positioning of cyberspace surveying and mapping were then described, its technological system was sorted out, and the key technologies involved were analyzed. Finally, the paper focused on geo-cyberspace mapping technologies and gave preliminary test results. The research shows that cyberspace, as a new kind of space and object, provides new thinking and theories for geography; that research on cyberspace mapping is of great significance for constructing and enriching the scientific theory of surveying and mapping; and that developing and applying the methods and technologies of geographic mapping can help network researchers and managers detect network resources and perceive the network situation more timely and precisely.
Review of Crash Exploitability Analysis Methods
ZHANG Jing, ZHOU An-min, LIU Liang, JIA Peng and LIU Lu-ping
Computer Science. 2018, 45 (5): 5-14.  doi:10.11896/j.issn.1002-137X.2018.05.002
Fuzzing is the principal technique at the current stage of vulnerability mining, and the majority of software vulnerabilities are currently discovered with it. However, fuzzing produces a large number of crash samples, and how to analyze these crash samples quickly is a major problem in fuzzing-based vulnerability mining. This paper focused on research into crash exploitability. Firstly, it summarized the causes of crashes and discussed the development status of crash analysis technology. It then analyzed in detail four effective methods for judging crash exploitability, which employ dynamic taint analysis, symbolic execution and other techniques. Finally, it compared the differences between the four methods and explored the future development directions and trends of crash exploitability analysis techniques.
Advances in Image Reranking
ZHAO Xiao-yan, LIU Hong-zhe, YUAN Jia-zheng and YANG Shao-peng
Computer Science. 2018, 45 (5): 15-23.  doi:10.11896/j.issn.1002-137X.2018.05.003
In recent years, with the rapid development of Internet technology and the popularity of multimedia terminals and electronic products, improving the efficiency of image search has become a challenge in media retrieval, and image search is a hot research issue in the image field. Many commercial image search techniques have been put into use, but because of the "semantic gap", the search results cannot fully meet users' needs and still contain noise. Image search reranking helps to solve this problem: based on the initial search, the results become more accurate and richer after reranking. This paper introduced the research progress of image search reranking technology, summarized and analyzed the current research methods, and compared the advantages and disadvantages of these methods and the key technologies of recent years. The latest research progress and the future development of image search reranking were also presented.
Survey of Graph Sparsification Algorithms for Complex Networks
XU Li-li, DONG Yi-hong, PAN Jian-fei and CHEN Hua-hui
Computer Science. 2018, 45 (5): 24-30.  doi:10.11896/j.issn.1002-137X.2018.05.004
With the increase of data scale, traditional graph algorithms confront the challenge of excessive complexity. Graph sparsification introduces sampling into graph algorithms: an analytical algorithm is run efficiently on a sparsified graph that preserves the properties of the original graph within a certain accuracy. A graph sparsification algorithm is a sampling method that preserves the original vertices and sparsifies the edges. In this paper, the latest progress of graph sparsification algorithms was reviewed from four aspects: sparse spanners, edge-connectivity sparsification, clustering sparsification and influence-propagation sparsification. The advantages, disadvantages and adaptability of these algorithms under different edge metrics were summarized. In addition, sparsification methods for dynamic graph streams were analyzed. Finally, several important open problems were outlined and future research directions in the field of complex network sparsification were discussed.
Evolutionary Genetic Algorithm for Multi-objective Optimization Scheduling of Cloud Workflow
WANG Guo-hao, LI Qing-hua and LIU An-feng
Computer Science. 2018, 45 (5): 31-37.  doi:10.11896/j.issn.1002-137X.2018.05.005
To optimize both the makespan and the execution cost of scientific workflow scheduling in cloud environments, this paper proposed a multi-objective evolutionary genetic scheduling algorithm named MOEGA. Based on evolutionary genetics, MOEGA defines an encoding mechanism for the mapping between tasks and virtual machines and for the placement of virtual machines on hosts, and designs a fitness function satisfying the multi-objective optimization. Meanwhile, to maintain the diversity of the population, crossover and mutation operations are introduced into the scheduling scheme, and a heuristic is used to initialize the population. Its performance was compared with algorithms of the same type in experiments on four types of real-world scientific workflows. The results show that MOEGA not only meets the deadline constraint of a workflow, but also outperforms the other algorithms in the overall performance of reducing makespan and execution cost.
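A minimal sketch of the kind of two-objective fitness such a scheduler can use, assuming a model in which tasks assigned to the same virtual machine run serially; the encoding, penalty handling and names below are illustrative, not the authors' exact formulation:

    # Illustrative two-objective fitness for a task -> VM assignment (assumed model).
    def fitness(assignment, task_time, vm_price, deadline):
        # assignment[i] = index of the VM running task i
        # task_time[i][v] = runtime of task i on VM v; vm_price[v] = cost per time unit
        vm_busy, cost = {}, 0.0
        for task, vm in enumerate(assignment):
            t = task_time[task][vm]
            vm_busy[vm] = vm_busy.get(vm, 0.0) + t  # tasks on one VM run serially
            cost += t * vm_price[vm]
        makespan = max(vm_busy.values())
        if makespan > deadline:                     # deadline constraint as a penalty
            return float('inf'), float('inf')
        return makespan, cost                       # both objectives are minimized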
WSN Wireless Data Transceiver Unit Fault Diagnosis with Fuzzy Neural Network
XUE Shan-liang, YANG Pei-ru and ZHOU Xi
Computer Science. 2018, 45 (5): 38-43.  doi:10.11896/j.issn.1002-137X.2018.05.006
In some wireless sensor network (WSN) security monitoring systems, nodes transfer large amounts of data over a long time, which causes decreasing transmission power and even burned power amplifiers (PA) in the wireless data transceiver unit, while existing methods for diagnosing this kind of fault are generally complex and inefficient. To solve these problems, on the basis of an analysis of WSN unit-level fault diagnosis, this paper proposed a fault diagnosis method based on a fuzzy neural network that uses the current model of the wireless data transceiver unit. Firstly, the current model is established according to the relationship among the emission current, the temperature and the supply voltage. Then, the structure of the fuzzy neural network is determined by a clustering algorithm, and a hybrid learning algorithm is used to optimize the premise and consequent parameters of the fuzzy rules. Finally, the fuzzy neural network parameters are extracted to establish the WSN node fault diagnosis model. The experimental results show that the presented method possesses low computational complexity and high diagnostic accuracy: compared with a Gaussian process regression model, its computational complexity is reduced by 22.4% and its diagnostic accuracy is increased by 17.5%.
Clustering Method in Wireless Sensor Networks Using Nonlinear Adaptive PSO Algorithm
LI Tong-yue and MA Wen-ping
Computer Science. 2018, 45 (5): 44-48.  doi:10.11896/j.issn.1002-137X.2018.05.007
How to prolong the network lifetime is an important consideration when designing a routing protocol for wireless sensor networks. To solve this problem, a novel clustering algorithm based on an improved particle swarm optimization (PSO) was presented. The algorithm modifies the inertia weight to keep particles from being trapped in local optima. It also takes both energy balance and transmission distance into account, and makes relay nodes cooperate with cluster heads to reduce the excessive energy consumption of cluster heads. The proposed algorithm was compared with other algorithms in various scenarios. Simulation results show that it performs well in distributing nodes and balancing the cluster system.
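For reference, a sketch of one PSO update step with a nonlinearly decaying inertia weight, in the spirit of the modification the abstract describes; the decay law and coefficients are assumed examples, not the paper's formula:

    import random

    def pso_step(positions, velocities, pbest, gbest, it, max_it,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
        w = w_min + (w_max - w_min) * (1 - it / max_it) ** 2  # nonlinear decay of inertia
        for i in range(len(positions)):
            for d in range(len(positions[i])):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]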
Resource Allocation Algorithm for Cognitive Heterogeneous Networks Based on Imperfect Spectrum Sensing
ZHUANG Ling and YIN Yao-hu
Computer Science. 2018, 45 (5): 49-53.  doi:10.11896/j.issn.1002-137X.2018.05.008
To mitigate interference in cognitive heterogeneous networks, this paper studied how to reduce the interference to macrocell users (MU) and improve system throughput. By completely analyzing the sources of interference, an interference model under imperfect spectrum sensing is established. Based on the users' topology information, an optimization problem is formulated to maximize the downlink throughput under a total power constraint and an interference constraint. The problem is then simplified by analyzing the Karush-Kuhn-Tucker (KKT) conditions, and a resource allocation algorithm under imperfect spectrum sensing is designed. Simulation results and performance analysis show that the proposed algorithm causes less interference to MU than an algorithm assuming perfect spectrum sensing, and achieves better throughput performance.
Anchor Selection and Distributed Topology Preserving Maps in Wireless Sensor Networks
SU Tao, GU Jing-jing and HUANG Tao-tao
Computer Science. 2018, 45 (5): 54-58.  doi:10.11896/j.issn.1002-137X.2018.05.009
Topology preserving maps (TPMs), as distorted versions of the physical map, have been widely applied in routing, localization and boundary node identification in wireless sensor networks. A TPM can be generated from a virtual coordinate system without any physical distance information. However, TPMs can give suboptimal results for complex networks with irregular boundaries when insufficient anchor nodes are selected to map the network. To this end, this paper developed a new topology preserving model, named multiple extreme node search-distributed topology preserving maps (MENS-DTPM), which consists of a new anchor selection method and a new distributed algorithm for producing topological coordinates based on TPMs. The method selects anchors more effectively and expresses the physical map better. Simulation results show that MENS-DTPM achieves better performance than other methods reported in the literature.
Geographic Routing Algorithm Based on Location Prediction in WSN
WANG Zhen-chao, HOU Huan-huan and LIAN Rui
Computer Science. 2018, 45 (5): 59-63.  doi:10.11896/j.issn.1002-137X.2018.05.010
To improve communication performance in highly dynamic wireless sensor networks, a new technical scheme (LPESGR) was presented. First, an energy-saving geographic routing algorithm (ESGR) and a node localization and prediction algorithm combining GPS with RSSI were provided. Then, on the basis of these two algorithms, a real-time route search algorithm based on energy efficiency was proposed to find the actual route with the least energy consumption. A new solution to the routing hole problem was also proposed, which avoids the shortcomings of the traditional scheme. In addition, to increase energy utilization and reduce the probability of route disruption, a real-time power adjustment scheme based on the predicted signal transmission distance was put forward. Simulation results demonstrate that the scheme effectively reduces network energy consumption and increases the success rate of data transmission.
Digital Modulation Signal Recognition Method Based on ST-RFT Algorithm
LIU Dan, MA Xiu-rong and SHAN Yun-long
Computer Science. 2018, 45 (5): 64-68.  doi:10.11896/j.issn.1002-137X.2018.05.011
In this paper, the short-time Ramanujan Fourier transform (ST-RFT) algorithm was applied to modulation signal recognition, yielding a new method that improves the correct recognition rate in low signal-to-noise ratio (SNR) environments. Modulation signals are recognized through normalized ST-RFT spectrogram calculation, characteristic parameter extraction and threshold decision. The simulation results show that when the SNR is 0 dB, the average correct recognition rate of the ST-RFT-based method reaches 90%, which is 10.4% higher than that of the method based on spectrogram time-frequency analysis. In particular, the correct recognition rate for the 4FSK signal is 9% higher than that of the method based on instantaneous amplitude and instantaneous frequency features. The effectiveness and reliability of the proposed method at low SNR are verified by the simulation results.
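The Ramanujan sum underlying the RFT has the standard closed form c_q(n) = sum over 1 <= a <= q with gcd(a, q) = 1 of cos(2*pi*a*n/q); a minimal sketch of one windowed coefficient follows, where the framing and normalization are assumptions for illustration:

    from math import gcd, cos, pi

    def ramanujan_sum(q, n):
        # standard definition of c_q(n); real-valued because conjugate terms pair up
        return sum(cos(2 * pi * a * n / q) for a in range(1, q + 1) if gcd(a, q) == 1)

    def st_rft_coeff(frame, q):
        # correlate one signal frame with c_q (normalization is an assumed choice)
        return sum(x * ramanujan_sum(q, n) for n, x in enumerate(frame)) / len(frame)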
Opportunistic Routing Algorithm Combining Intra-session Coding and Inter-session Coding in Wireless Network
HAN Li and QIAN Huan-yan
Computer Science. 2018, 45 (5): 69-74.  doi:10.11896/j.issn.1002-137X.2018.05.012
This paper proposed a maximum-weight scheduling framework for wireless mesh networks. In the framework, the virtual queues of coded packets (named "credit queues") are updated according to a credit assignment algorithm. A node chooses its way of encoding according to its credit queues so as to achieve network utility maximization and fair resource allocation between flows. A heuristic algorithm, MiiCode, was given. In this algorithm no deterministic routing is needed, which makes it more flexible and helps find more chances for inter-session coding. The local repair capability of intra-session coding decreases the number of packets retransmitted from source nodes, and thus the total cost of the network. MiiCode was compared with COPE and the routing-based LOR in OMNeT++ simulations.
Intersection-relay-assisted Routing Scheme in VANETs
SUN Hai-feng and SONG Li-li
Computer Science. 2018, 45 (5): 75-78.  doi:10.11896/j.issn.1002-137X.2018.05.013
Owing to the fast movement of vehicles and rapidly changing network links in vehicular ad hoc networks, the local maximum problem often occurs at road intersections in previous routing schemes. Based on the road characteristics of urban environments, and supported by the physical infrastructure and electronic facilities of the traffic light system, an intersection-relay-assisted routing (IRAR) scheme was proposed. By building a stochastic model of road message delay, a globally optimal forwarding path is obtained for each message, and the local maximum problem is solved with relay-assisted infrastructure. Furthermore, an intersection forwarding mode and a greedy straight-way mode are designed for messages located at intersections and on straight ways, respectively. Finally, simulations were conducted to compare IRAR with state-of-the-art schemes. The results suggest that IRAR significantly outperforms the compared schemes in delivery ratio and delay.
Study and Design of Interleaver for Repeat Accumulate Codes
TIAN Xiao-yan, WEI Na, FAN Ze-ming and ZHANG Suo-liang
Computer Science. 2018, 45 (5): 79-82.  doi:10.11896/j.issn.1002-137X.2018.05.014
A repeat accumulate code combines the advantages of both Turbo and LDPC codes: it has the linear encoding of Turbo codes and a linear decoding algorithm similar to that of LDPC codes, so it has become a hot topic in recent years. To improve the performance of repeat accumulate codes, this paper optimized the design of the interleaver. A parity block interleaver was designed on the basis of several common interleavers, and the performance of the BP decoding algorithm in the AWGN channel was analyzed. The simulation results show that the parity block interleaver performs better than the block interleaver: it greatly reduces the correlation between information sequences, avoids generating two kinds of length-4 cycles in the check matrix, and improves the reliability of information transmission over the channel.
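For reference, a sketch of the classic row-column block interleaver that the parity variant builds on (the parity read-out order itself is not reproduced here):

    # write row by row, read column by column
    def block_interleave(bits, rows, cols):
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    def block_deinterleave(bits, rows, cols):
        return block_interleave(bits, cols, rows)  # the inverse swaps the dimensions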
Link Burstiness-aware Opportunistic Routing Protocol in Wireless Networks
XU Wen-hao, SHEN Hang and BAI Guang-wei
Computer Science. 2018, 45 (5): 83-88.  doi:10.11896/j.issn.1002-137X.2018.05.015
Existing opportunistic routing metrics based on the spatial correlation of wireless links cannot perceive link burstiness. To address this problem, a burstiness-aware metric called μETX was proposed, and based on it, a new opportunistic routing algorithm (ORALB) was introduced for link-correlated wireless networks. ORALB fully exploits wireless link correlation by choosing weakly correlated nodes as the candidate forwarder set; meanwhile, being burstiness-aware, it avoids selecting wireless links with high transmission cost. Simulation results show that, compared with other relevant routing protocols, ORALB is more efficient in decreasing the average packet transmission cost and improving transmission reliability.
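For context, the classic ETX metric that burstiness-aware variants such as μETX start from; df and dr are the measured forward and reverse delivery ratios of the link (the μETX correction itself is not reproduced here):

    def etx(df, dr):
        # expected transmission count of a link
        return 1.0 / (df * dr) if df > 0 and dr > 0 else float('inf')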
QEMU Based Abnormal Communication Analysis of Linux Applications
AO Quan, LU Hui-mei, XIANG Yong and CAO Rui-dong
Computer Science. 2018, 45 (5): 89-96.  doi:10.11896/j.issn.1002-137X.2018.05.016
This paper presented a semi-automatic analysis method based on the QEMU emulator (Socket Analysis based on QEMU, SAQ), which can be used to detect covert communication of ELF-format programs on the Linux platform and prevent information leakage. By modifying QEMU, a dynamic tracing tool named QEMU-TRACER was developed, and the suspicious communication functions in an application are located with it. Using binary rewriting, the suspicious functions are disabled one by one, and the behaviors of the program before and after modification are compared to identify and eliminate the abnormal communication. Experiments on OpenSSH and ProFTPD show that SAQ can detect abnormal communication behaviors and succeeds in disabling them.
Risk Observing Method Based on Short-time Multi-source Regression Algorithm on P2P Platform
LIU Pan, LI Hua-kang and SUN Guo-zi
Computer Science. 2018, 45 (5): 97-101.  doi:10.11896/j.issn.1002-137X.2018.05.017
Peer-to-peer (P2P) lending is a popular form of lending in contemporary Internet finance. Lending amounts are small and repayment cycle lengths differ, so annual risk assessment can easily cause losses to platform investors. This paper proposed a method to dynamically evaluate the operation risk of P2P platforms based on a short-time multi-source regression algorithm. In this algorithm, dynamic time windows are used to split the lending records, and a linear regression method is used to quantify the dynamic risk index of a P2P platform. The experimental results show that the method can reflect the visible operation situation of the platforms, and can provide investors with dynamic risk assessment and forecast indicators.
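A minimal sketch of the windowed-regression idea, assuming a per-platform daily series (e.g. lending volume) and reading the fitted slope of each window as a dynamic risk signal; the window size, step and the slope-as-risk interpretation are illustrative assumptions:

    import numpy as np

    def windowed_trends(series, window=30, step=7):
        trends = []
        for start in range(0, len(series) - window + 1, step):
            y = np.asarray(series[start:start + window], dtype=float)
            slope, _ = np.polyfit(np.arange(window), y, 1)  # degree-1 least squares
            trends.append(slope)
        return trends  # e.g. falling slopes flag rising risk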
Data Security Mix Transmission Mechanism in Body Area Network
WANG Wei-xing
Computer Science. 2018, 45 (5): 102-107.  doi:10.11896/j.issn.1002-137X.2018.05.018
Wireless body area networks, as a new type of network, are widely used in health care, emergency rescue and other fields. However, the data collected by a wireless body area network are mostly related to users' personal information, so actual deployment is bound to face serious risks of sensitive data leakage or malicious tampering. To protect the privacy of user data in the body area network environment, this paper proposed a data security mix transmission mechanism: a K-anonymous mix transmission algorithm based on a multi-loop model (AMT-ML). The main idea is to divide the communication nodes into several communication loop networks and apply the K-anonymity scheme to the perception layer and the transport layer of the hierarchical model. To ensure that the user data in each layer are relatively safe, multiple rounds of K-anonymous processing are used. The experimental results show that the privacy protection of AMT-ML is superior to that of the K-anonymous mix transmission algorithm based on a single-loop model (AMT-SL), and the communication distance of the multi-loop mode in the two-layer network is less than that of the single-loop mode.
Collaborative Filtering Recommendation Algorithm Based on Multiple Trust
YU Yang, YU Hong-tao and HUANG Rui-yang
Computer Science. 2018, 45 (5): 108-115.  doi:10.11896/j.issn.1002-137X.2018.05.019
Aiming at the reduced accuracy and coverage of collaborative filtering recommender systems caused by the sparseness of rating data and user cold start, this paper integrated explicit and implicit trust factors and proposed a collaborative filtering recommendation algorithm based on multiple trust. Firstly, an improved mean squared difference (MSD) trust metric was proposed based on the accuracy and dependability of the recommended scores among users, and on this basis a rating model based on implicit trust information was built. Secondly, taking the maximum trust propagation distance as a constraint, a relational model of explicit trust information was proposed. Finally, based on the similarity between ratings and the explicit trust, the optimal neighbor set of the target user was selected by a 0-1 knapsack combination optimization strategy, and rating prediction was carried out. Comparisons with a variety of state-of-the-art algorithms on the Epinions dataset demonstrate that the proposed algorithm greatly alleviates the data sparsity and cold start problems by introducing effective ratings and explicit trust relationships, and significantly improves recommendation accuracy while preserving good coverage.
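For reference, the classic MSD-style similarity that the paper's trust metric improves on, sketched for ratings in an assumed [1, 5] range (the accuracy and dependability factors of the improved metric are not reproduced here):

    def msd_similarity(ratings_u, ratings_v, r_max=5.0, r_min=1.0):
        # ratings_u, ratings_v: dicts mapping item id -> rating
        common = set(ratings_u) & set(ratings_v)
        if not common:
            return 0.0
        msd = sum((ratings_u[i] - ratings_v[i]) ** 2 for i in common) / len(common)
        return 1.0 - msd / (r_max - r_min) ** 2  # 1 = identical, 0 = maximally different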
Efficient Parallel Algorithm of Fully Homomorphic Encryption Supporting Operation of Floating-point Number
SHI Jing-qi, YANG Geng, SUN Yan-jun, BAI Shuang-jie and MIN Zhao-e
Computer Science. 2018, 45 (5): 116-122.  doi:10.11896/j.issn.1002-137X.2018.05.020
The rapid development of cloud computing brings convenience to people, but also security problems such as privacy preservation. One major way to solve this problem is to use a fully homomorphic encryption (FHE) algorithm, which supports operations directly on encrypted data. However, because most fully homomorphic encryption schemes currently support only limited data types, it is difficult to apply them in practice. In this paper, a fully homomorphic encryption algorithm supporting floating-point operations and its parallelization based on Spark were proposed. The security and performance of the parallel algorithm were analyzed in theory, and experiments were conducted to measure its practical performance. Experimental results show that the overall speed-up ratio of the given algorithm reaches 3.9 on a 4-node, 16-core cluster, and that the encryption time and the computation time on encrypted data are reduced effectively. The parallel fully homomorphic encryption algorithm can satisfy the encryption requirements of large-scale data in cloud environments.
Study on Secure Retrieval Scheme over Encrypted Data Supporting Result Ranking
YAO Han-bing, XING Na-na, ZHOU Jun-wei and LI Yong-hua
Computer Science. 2018, 45 (5): 123-130.  doi:10.11896/j.issn.1002-137X.2018.05.021
More and more organizations and users outsource their data to low-cost, high-quality cloud storage. To protect sensitive data, users encrypt them before deployment, but this brings a big challenge to retrieval. This paper modified the traditional inverted index into an encrypted inverted index and built a counting Bloom filter (CBF) over it, proposing a secure index based on counting Bloom filters (SICBF) for searching encrypted data that meets strict keyword and index privacy requirements. A CBF pruning algorithm was also designed to reduce the redundancy of the SICBF index. To protect the privacy of the relevance scores (RSC) in SICBF, a one-to-many order-preserving mapping encryption (OPME) was employed to encrypt the RSC. The search results are sorted directly on the encrypted RSC, and only the most relevant top-k documents are returned to authorized users. Security analysis illustrates that the encrypted RSC distribution differs from the original one, hiding the peak value of the RSC and preventing statistical attacks. The experiments show that SICBF has high retrieval efficiency and low computational cost, and is suitable for searching massive encrypted data.
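A minimal counting Bloom filter, the generic structure SICBF builds on; the sizing and hashing below are illustrative, not the paper's construction:

    import hashlib

    class CountingBloomFilter:
        def __init__(self, size=1024, num_hashes=4):
            self.size, self.num_hashes = size, num_hashes
            self.counters = [0] * size

        def _positions(self, item):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for p in self._positions(item):
                self.counters[p] += 1

        def remove(self, item):  # counters, unlike plain bits, allow deletion
            for p in self._positions(item):
                self.counters[p] -= 1

        def might_contain(self, item):  # no false negatives, some false positives
            return all(self.counters[p] > 0 for p in self._positions(item))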
MACSPMD:Malicious API Call Sequential Pattern Mining Based Malware Detection
RONG Feng-ping, FANG Yong, ZUO Zheng and LIU Liang
Computer Science. 2018, 45 (5): 131-138.  doi:10.11896/j.issn.1002-137X.2018.05.022
Researchers favor dynamic-analysis-based malware detection methods because they nullify the effects of polymorphism and obfuscation and can detect new and unseen malware. In response, malware authors embed numerous anti-detection functions into malware to evade existing dynamic detection methods. To solve this problem, a malware detection method named MACSPMD, based on malicious API call sequential pattern mining, was proposed. Firstly, the dynamic API call sequences of files are collected on a real machine that simulates the actual running environment of malware. Secondly, malicious API call sequential patterns that represent potential malicious behavior patterns are mined by introducing the concept of objective-oriented association mining. Finally, the malicious API call sequential patterns are used as abnormal behavior features to detect malware. The experimental results on a real data set show that MACSPMD achieves 94.55% and 97.73% detection accuracy on unknown and evasive malware respectively. Compared with other malware detection methods based on API call data, the detection accuracy on unknown and evasive malware is improved by 2.47% and 2.66% respectively, and less time is consumed in the mining process. MACSPMD can effectively detect known and unknown malware, including the evasive type.
Secure Comparator Based Privacy-preserving Sorting Algorithms for Clouds
REN Hui, DAI Hua and YANG Geng
Computer Science. 2018, 45 (5): 139-142.  doi:10.11896/j.issn.1002-137X.2018.05.023
Data outsourcing is accepted by more and more companies and individuals owing to the low costs of resource configuration and maintenance. However, it makes owners lose control of their data, which causes privacy-preserving problems for sensitive data. Sorting is a common operation in computer applications, and implementing privacy-preserving sorting over encrypted data without leaking plaintext is a challenge. This paper proposed privacy-preserving sorting algorithms based on secure comparators. A secure comparator is constructed with 0-1 encoding and HMAC techniques, and can compare two data items without revealing their real values. Data owners first encrypt their data and then outsource the corresponding secure comparators into clouds. Cloud servers can sort the outsourced encrypted data according to their secure comparators using the proposed privacy-preserving sorting algorithms. Experimental results show that the proposed algorithms outperform other algorithms in both time and space metrics.
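A sketch of the 0-1 encoding comparison trick with HMAC, assuming the well-known prefix-set formulation (x > y iff the 1-encoding of x intersects the 0-encoding of y); keying both sets lets an untrusted server test the intersection without learning the values. The bit width and key are assumptions:

    import hmac, hashlib

    BITS = 16

    def one_encoding(x):
        b = format(x, f'0{BITS}b')
        return {b[:i + 1] for i in range(BITS) if b[i] == '1'}

    def zero_encoding(x):
        b = format(x, f'0{BITS}b')
        return {b[:i] + '1' for i in range(BITS) if b[i] == '0'}

    def mac_set(strings, key):
        return {hmac.new(key, s.encode(), hashlib.sha256).hexdigest() for s in strings}

    def greater_than(x, y, key=b'shared-secret'):
        # non-empty intersection of the keyed sets <=> x > y
        return bool(mac_set(one_encoding(x), key) & mac_set(zero_encoding(y), key))

    assert greater_than(42, 17) and not greater_than(17, 42)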
Query Probability Based Dummy Location Selection Algorithm
WU Zhong-zhong, LV Xin and LI Xin
Computer Science. 2018, 45 (5): 143-146.  doi:10.11896/j.issn.1002-137X.2018.05.024
Location-based services (LBS) have become a vital part of daily life. While enjoying the convenience provided by LBS, users run a high risk of losing privacy. Traditional K-anonymity for location privacy does not consider that adversaries may possess background knowledge and side information. Therefore, this paper proposed an improved dummy location selection algorithm to protect location privacy. Firstly, the space is divided into grids, and historical statistics are used to obtain the query probability of each grid cell. Then the dummy locations are selected so that the K-1 chosen cells have query probabilities as close as possible to that of the cell where the real user is located, and the K locations are spread as widely as possible. A series of experiments on synthetic datasets shows that the algorithm is feasible and effective.
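A minimal sketch of the selection step, assuming cells are first ranked by how close their historical query probability is to the user's cell and then spread out greedily; the candidate pool size and the max-min dispersion rule are illustrative choices, not the paper's exact procedure:

    def pick_dummies(cells, probs, user_cell, k, dist):
        # cells: candidate cell ids; probs: dict cell -> historical query probability
        pool = sorted((c for c in cells if c != user_cell),
                      key=lambda c: abs(probs[c] - probs[user_cell]))[:2 * (k - 1)]
        chosen = [user_cell]
        while len(chosen) < k and pool:
            # greedily add the candidate farthest from everything chosen so far
            best = max(pool, key=lambda c: min(dist(c, s) for s in chosen))
            chosen.append(best)
            pool.remove(best)
        return chosen[1:]  # the K-1 dummy cells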
SaaS Service Evolution Consistency Checking with Tenant Tolerance
WANG Xiao-fang, XIE Zhong-wen, LI Tong, CHENG Lei, ZHENG Jiao-jiao and LIU Xiao-fang
Computer Science. 2018, 45 (5): 147-155.  doi:10.11896/j.issn.1002-137X.2018.05.025
With the development and maturity of cloud computing technology, software as a service (SaaS) has become the main trend of software application. In a multivariate, open network ecological environment, SaaS services must be able to evolve in order to respond effectively to user needs and the external environment. Lacking a uniform standard and an explicit quantitative measure, judgments of evolution consistency often overlook the feelings of tenants. In response to these shortcomings, this paper focused on tenant demand, which is the core of SaaS, and proposed a description model for SaaS services covering a structural layer and a non-functional layer. Based on the model, it took full account of tenants' evolutionary needs and introduced the degree of evolution consistency to analyze evolution consistency quantitatively. It then proposed a three-tier consistency checking method. Finally, this method was used to judge evolution consistency in a SaaS application case, and its feasibility and validity were verified by the actual application feedback.
Key Techniques of a Kind of Distributed Cache Systems and Their Applications
TU Yao-feng, LIU Hui, ZHANG Guo-liang and LIU Chun
Computer Science. 2018, 45 (5): 156-162.  doi:10.11896/j.issn.1002-137X.2018.05.026
As a key technique for mass data processing, distributed caching has received much attention and been widely applied recently. By analyzing the current situation and the deficiencies of distributed caches, this paper proposed an architecture for a kind of distributed cache system and explained the principles of its key technologies in depth. On this basis, a new-generation distributed cache system, DCACHE, was designed and implemented. Finally, DCACHE was analyzed and validated in an RCS (Rich Communication Suite) application.
Design and Implementation of Accessing Hybrid Database Systems Based on Middleware
PAN Ming-ming, LI Ding-ding, TANG Yong and LIU Hai
Computer Science. 2018, 45 (5): 163-167.  doi:10.11896/j.issn.1002-137X.2018.05.027
In the big data era, with the explosive growth of data, it is difficult for either traditional relational (SQL) databases or emerging NoSQL databases alone to face these challenges comprehensively and efficiently. Therefore, this paper proposed MingleDB, an efficient middleware that incorporates the merits of both traditional databases and NoSQL systems. MingleDB is transparent to the underlying hybrid database systems (SQL and NoSQL) and requires no intrusive modifications to the original systems. MingleDB detects the specific characteristics of a user query, for example whether the data involved are structured or unstructured, and then routes the query to the suitable engine (SQL or NoSQL). It also provides a lightweight, on-demand transaction processing framework to ensure the final consistency and completeness of the hybrid data. Extensive experiments were conducted to test the effectiveness of MingleDB, and a realistic scenario was used to verify its advantages. The results are positive.
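A toy sketch of the dispatch idea, assuming structured SQL statements go to the relational engine and schemaless documents to the NoSQL store; the dispatch rule and engine interfaces are placeholders for illustration, not MingleDB's API:

    def route(query, sql_engine, nosql_engine):
        if isinstance(query, dict):  # schemaless document -> NoSQL store
            return nosql_engine.execute(query)
        if isinstance(query, str) and query.lstrip().upper().startswith(
                ('SELECT', 'INSERT', 'UPDATE', 'DELETE')):
            return sql_engine.execute(query)  # structured statement -> SQL engine
        raise ValueError('unrecognized query type')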
Recognition Model of Microblog Opinion Leaders Based on Sentiment Orientation Analysis
CHEN Zhi-xiong, WANG Shi-hui and GAO Rong
Computer Science. 2018, 45 (5): 168-175.  doi:10.11896/j.issn.1002-137X.2018.05.028
There are many existing methods for identifying opinion leaders, including comprehensive analysis of users' personalized features and analysis based on the structure of social networks. These methods consider only the characteristics of users while ignoring the interaction between users, or fail to take microblog sentiment factors into consideration. Therefore, this paper proposed a microblog opinion leader identification method based on sentiment orientation analysis. First of all, a support vector machine is used to analyze the sentiment of microblog text on the basis of word frequency statistics over a synthesized sentiment dictionary. Then the coefficient-of-variation method is used to weight microblog attributes so as to reflect the influence of each microblog. Finally, the PageRank algorithm is employed to model the diffusion of users' influence in the microblog network, and the final influence of each user is calculated. On a Sina Weibo dataset, the proposed model was tested and compared with methods based on user attribute analysis and PageRank, using forwarding rate and coverage as indexes. The experimental results show the superiority of the proposed method.
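For reference, the coefficient-of-variation weighting named in the abstract: attributes that vary more across microblogs receive larger weights (the column meanings below are illustrative):

    import numpy as np

    def cv_weights(X):
        X = np.asarray(X, dtype=float)  # rows: microblogs, cols: attributes
        cv = X.std(axis=0) / (X.mean(axis=0) + 1e-12)  # coefficient of variation
        return cv / cv.sum()  # normalized attribute weights

    # e.g. columns = [reposts, comments, likes]
    w = cv_weights([[12, 3, 40], [200, 50, 900], [5, 1, 12]])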
P-data Model and Intelligent Acquisition of P-data
XU Feng-sheng, YU Xiu-qing and SHI Kai-quan
Computer Science. 2018, 45 (5): 176-179.  doi:10.11896/j.issn.1002-137X.2018.05.029
A P-data (packet data) model possessing dynamic characteristics was proposed for the first time in this paper by improving the P-sets (packet sets) model; it is composed of an internal packet data part and an outer packet data part. The structure of the P-data model was given, its attribute conjunctive normal form was derived, and then a P-data reasoning model and a theorem for the intelligent acquisition of data were obtained. Finally, these concepts and results were applied to the intelligent acquisition of internal packet data and to risk estimation.
Multivariate Time Series Classification Based on Shapelets Learning
ZHAO Hui-yun and PAN Zhi-song
Computer Science. 2018, 45 (5): 180-184.  doi:10.11896/j.issn.1002-137X.2018.05.030
Multivariate time series data exist in a wide range of real-life domains, and multivariate time series classification is a basic method for obtaining information from time series data. At present, time series classification suffers from the special similarity measures required by time series data and the high dimensionality of the original data, so the performance of existing multivariate time series classification methods still needs improvement. This paper presented a multivariate time series classification method based on shapelet learning. First, a shapelet learning method is established under a regularized least-squares loss learning framework, and a one-dimensional shapelet-based time series classifier is applied to each univariate component of the multivariate time series. The final result for the multivariate time series is then determined by plurality voting. Experimental results indicate that the proposed method achieves high classification accuracy on multivariate time series classification problems.
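The quantity at the heart of shapelet methods is the series-to-shapelet distance, the minimum mean squared distance over all sliding windows; a minimal sketch follows (the regularized learning of the shapelets themselves is not reproduced):

    import numpy as np

    def shapelet_distance(series, shapelet):
        series, shapelet = np.asarray(series, float), np.asarray(shapelet, float)
        L = len(shapelet)
        return min(np.mean((series[i:i + L] - shapelet) ** 2)
                   for i in range(len(series) - L + 1))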
Pattern Mining of Missing GPS Trajectory Based on Survival Analysis
ZHENG Jian-wei, GU Jing-jing and ZHUANG Yi
Computer Science. 2018, 45 (5): 185-189.  doi:10.11896/j.issn.1002-137X.2018.05.031
In recent years, intelligent transportation systems (ITS) have been an effective way to improve the performance of transportation systems and enhance travel safety. However, with the increase of data size in intelligent transportation systems, the problem of data loss becomes increasingly serious, and trajectory data loss caused by vehicle-mounted GPS signal loss is one of the main research subjects. The causes of GPS data loss are various, which makes data completion difficult, yet there are few studies on the patterns of missing GPS trajectories. In this paper, based on a large amount of real data on diverse GPS signal losses, survival analysis was applied to the data-missing field for the first time, and a survival analysis-missing trajectory pattern mining (SA-MTPM) model was proposed. The relationship between the length of a signal loss and its causes was described with the survival function, and the Cox model was used to analyze the key factors of signal loss. Experiments were performed on one month of GPS data from 13666 vehicles of the Shanghai Qiangsheng Taxi Company. The experimental results show that signal loss events can be classified, which provides a basis for further big data studies.
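A minimal sketch of the survival-analysis step, treating each signal-loss episode as a record whose duration is the gap length and regressing it on candidate covariates with a Cox model; the lifelines library and the column names are assumed tooling and schema, not the paper's:

    import pandas as pd
    from lifelines import CoxPHFitter  # assumed tooling; any Cox implementation works

    gaps = pd.DataFrame({
        'duration_s': [35, 620, 90, 14, 300],  # length of each GPS gap
        'recovered':  [1, 1, 1, 0, 1],         # 0 = censored (trip ended mid-gap)
        'in_tunnel':  [0, 1, 0, 0, 1],
        'speed_kmh':  [42, 8, 55, 30, 12],
    })
    cph = CoxPHFitter()
    cph.fit(gaps, duration_col='duration_s', event_col='recovered')
    cph.print_summary()  # hazard ratios indicate key factors of signal loss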
Collaborative Filtering Recommendation Algorithm Based on Difference and Correlation of Users’ Ratings
WANG Jing-song, CAI Zhao-hui, LI Yong-kai and LIU Shu-bo
Computer Science. 2018, 45 (5): 190-195.  doi:10.11896/j.issn.1002-137X.2018.05.032
The traditional similarity measures in collaborative filtering mainly pay attention to the similarity between users' ratings, lacking consideration of the difference between users' ratings. This paper divided the relationship between users' ratings into a differential part and a correlated part, and proposed a similarity measure based on both the difference and the correlation of users' ratings for non-sparse datasets. To solve the problem that the algorithm's recommendations are not accurate on sparse datasets, the algorithm was improved by prefilling the vacancies of the rating matrix. Experimental results show that the algorithm significantly improves recommendation accuracy after the rating matrix is prefilled.
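One way to combine a correlation term with a difference term into a single user-user similarity; the equal weighting below is an assumed example, not the paper's exact formula:

    import numpy as np

    def diff_corr_similarity(u, v, r_max=5.0, r_min=1.0, alpha=0.5):
        # u, v: co-rated rating vectors of the two users
        u, v = np.asarray(u, float), np.asarray(v, float)
        corr = np.corrcoef(u, v)[0, 1] if u.std() > 0 and v.std() > 0 else 0.0
        diff = 1.0 - np.mean(np.abs(u - v)) / (r_max - r_min)  # 1 = no difference
        return alpha * corr + (1 - alpha) * diff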
KNN Similarity Graph Algorithm Based on Heap and Neighborhood Coexistence
WANG Ying and YANG Yu-wang
Computer Science. 2018, 45 (5): 196-200.  doi:10.11896/j.issn.1002-137X.2018.05.033
In spectral clustering, the construction of the similarity graph is very important and has a great impact on both the clustering results and the running efficiency of the algorithm. To speed up spectral clustering and improve its performance by nearest-neighbor truncation, the K nearest neighbor (KNN) method is usually used to construct a sparse similarity graph. But the KNN graph is very sensitive to outliers in the data, and noise can seriously degrade clustering performance. This paper presented a new efficient sparse affinity graph construction method, HCKNN. In this method, heap-based K nearest neighbor search is more efficient than sort-based nearest neighbor selection by a factor of log(n), and sparsifying the similarity matrix with a neighborhood-coexistence accumulation threshold not only removes noise to enhance clustering performance, but also accelerates the eigenvector decomposition in spectral clustering.
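A sketch of the heap-based selection: keeping a size-k max-heap makes each query O(n log k) instead of the O(n log n) of full sorting, which is the log(n)-factor saving the abstract refers to:

    import heapq

    def knn(points, query, k, dist):
        heap = []  # max-heap of the k nearest so far, via negated distances
        for idx, p in enumerate(points):
            d = dist(p, query)
            if len(heap) < k:
                heapq.heappush(heap, (-d, idx))
            elif -heap[0][0] > d:  # closer than the current worst neighbor
                heapq.heapreplace(heap, (-d, idx))
        return sorted((-nd, idx) for nd, idx in heap)  # (distance, index) pairs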
Improved MOEA/D Algorithm Based on Self-adaptive Selection Strategy
GENG Huan-tong, DING Yang-yang, ZHOU Li-fa and HAN Wei-min
Computer Science. 2018, 45 (5): 201-207.  doi:10.11896/j.issn.1002-137X.2018.05.034
Using simple and efficient neighborhood updating as its main selection strategy, MOEA/D focuses on local replacement; it thus ignores an individual's global fitness and easily falls into local optima. To overcome this flaw, an improved MOEA/D based on a self-adaptive selection strategy (MOEA/D-AS), which takes both global replacement and local updating into account, was proposed. Firstly, a new selection strategy based on perfect matching of a bipartite graph (KMS) is applied to select elite solutions from a global perspective, using the matching relationship between subproblems and individual solutions. Then, using the evolutionary information of the population, a matching disorder judgment mechanism is proposed. Finally, on the basis of a comprehensive analysis of the advantages of the neighborhood update strategy and KMS, the new algorithm adaptively chooses the most suitable selection strategy by means of the disorder judgment mechanism, improving robustness and efficiency. The proposed algorithm was compared with several state-of-the-art versions of MOEA/D on the LZ09, DTLZ and CEC09 benchmarks. The Spread and IGD values show that MOEA/D-AS has certain advantages over the other algorithms in convergence and distribution, and suggest that the self-adaptive selection strategy can effectively guide the selection of elite solutions.
Differential Hybrid Particle Swarm Optimization Algorithm Based on Different Dimensional Variation
LI Jun, LUO Yang-kun, LI Bo and LI Qiao-mu
Computer Science. 2018, 45 (5): 208-214.  doi:10.11896/j.issn.1002-137X.2018.05.035
Considering that particle swarm optimization (PSO) and differential evolution (DE) find it difficult to control the initial distribution of particles and easily fall into local optima, reducing convergence accuracy in the later stage, a differential hybrid particle swarm optimization algorithm based on different-dimensional variation was proposed. Firstly, to improve the diversity of the particle swarm, an entropy-based measure was introduced to initialize the particles. Secondly, during particle iteration, a different-dimensional variation learning strategy and dimension factors were adopted, according to the particle distribution, to guide stagnated particles out of local optima and toward the best solution in a timely manner. Finally, the algorithm was simulated on ten typical test functions. The convergence precision and standard deviation show that the proposed method outperforms PSO, DEPSO and CDEPSO on nine of the test functions, proving that the algorithm has a strong advantage in convergence accuracy and optimization efficiency.
Properties of Triple I Reasoning Method Based on Fuzzy Soft Set
XUE Bin-bin and QIN Ke-yun
Computer Science. 2018, 45 (5): 215-219.  doi:10.11896/j.issn.1002-137X.2018.05.036
The reversibility and continuity of the triple I reasoning method based on fuzzy soft sets were investigated. For fuzzy implication operators induced by left-continuous t-norms, conditions for the reversibility of the triple I reasoning method for FSMP (fuzzy soft modus ponens) were provided. Besides, for the Lukasiewicz, Gödel and R0 fuzzy implication operators, the continuity of the triple I reasoning method for FSMP was proved.
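For reference, the classical triple I solution for fuzzy modus ponens, which the fuzzy-soft-set FSMP version generalizes; here ⊗ is a left-continuous t-norm and → its residuated implication:

    B^{*}(y) = \sup_{x \in X} \{ A^{*}(x) \otimes (A(x) \rightarrow B(y)) \}, \quad y \in Y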
Multi-class Classification Algorithm for SVM Based on Hybrid Binary Tree Structure
LENG Qiang-kui, LIU Fu-de and QIN Yu-ping
Computer Science. 2018, 45 (5): 220-223.  doi:10.11896/j.issn.1002-137X.2018.05.037
To improve the classification efficiency of multi-class support vector machines, a multi-class SVM classification algorithm based on a hybrid binary tree structure was proposed. In the tree, each internal node corresponds to a partition hyperplane, obtained as the perpendicular bisector of the segment linking the centroids of the two classes farthest from each other. Each terminal (decision) node is associated with an SVM whose training set consists of the two sets of samples rather than the two centroids. The resulting classification model is thus a hybrid of hyperplanes and SVMs: the approximate centroid-based hyperplanes provide fast partitioning in the early stages of classification, whereas the SVMs perform the final precise decisions. Experimental results show that, compared with classical multi-class SVMs, the proposed algorithm reduces computation time and improves classification efficiency with similar classification accuracy.
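A sketch of the geometric step at an internal node, assuming class centroids are given: choose the two farthest centroids and return the perpendicular bisector of the segment joining them as (w, b) with side(x) = sign(w·x + b):

    import numpy as np

    def bisector_split(centroids):
        # centroids: dict mapping class label -> centroid vector (np.array)
        cs = list(centroids.items())
        (_, a), (_, b) = max(((p, q) for i, p in enumerate(cs) for q in cs[i + 1:]),
                             key=lambda pq: np.linalg.norm(pq[0][1] - pq[1][1]))
        w = a - b
        mid = (a + b) / 2
        return w, -np.dot(w, mid)  # hyperplane w.x + b = 0 through the midpoint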
Study on New Model for Learning Memory
WU Xiao-jing, SHI Zhen-guo and CHENG Xian-yi
Computer Science. 2018, 45 (5): 224-227.  doi:10.11896/j.issn.1002-137X.2018.05.038
On the basis of traditional cognitive psychology, a new memory model was proposed by extending the AS model. The method uses the memory principle of the AS model and the mnemonic mechanism of the cerebral cortex to detail the process of memory, and a new memory algorithm based on the model was then proposed. Experiments show that the algorithm is practical, effective and feasible, and that its mechanism is close to human thinking.
Feature Matching Algorithm Based on Visual Feature Constrained Energy Minimization
LIU Zhao-xia, SHAO Feng, JING Yu and QI Rui-hua
Computer Science. 2018, 45 (5): 228-231.  doi:10.11896/j.issn.1002-137X.2018.05.039
Aerial remote sensing images captured over the sea have monotonous and repetitive patterns, which may result in mismatches during feature matching. To remove mismatches and simplify the matching process, a novel feature matching algorithm based on constrained energy minimization of SIFT features (CEM-SIFT) was proposed. In the algorithm, a finite impulse response filter is designed and visual information is used to compute the output energy; under the constraint imposed by the desired signature, the filter output energy is minimized. Ten pairs of sea-ice aerial images were used to evaluate the performance. The experimental results show that CEM-SIFT is more accurate than Euclidean-distance matching of SIFT features (ED-SIFT) when matching images with many repetitive features and a large point set.
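For context, the generic constrained energy minimization filter: minimize the output energy subject to a unit response to the desired signature d, giving w = R⁻¹d / (dᵀR⁻¹d) with R the sample correlation matrix. Applying it to SIFT descriptors is the paper's setting; only the generic filter is sketched here:

    import numpy as np

    def cem_filter(X, d):
        # X: (n_samples, n_dims) observations; d: (n_dims,) desired signature
        R = X.T @ X / X.shape[0]      # sample correlation matrix
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d @ Rinv_d)  # filter weights w

    # scores = X @ cem_filter(X, d); outputs near 1 match the signature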
Application of Two-stream Faster R-CNN in RGB-D Hand Detection
LIU Zhuang, CHAI Xiu-juan and CHEN Xi-lin
Computer Science. 2018, 45 (5): 232-237.  doi:10.11896/j.issn.1002-137X.2018.05.040
In most vision tasks related to human hands, such as human-computer interaction and sign language recognition, hand detection is a distinctly important preprocessing step. With the development of RGB-D data acquisition equipment, the extra depth data can effectively complement the color data and thus provide a more powerful feature representation. Traditional detection methods based on hand-crafted features (skin color or HOG) cannot represent hands well, while many deep learning detection methods avoid this weakness by learning effective features from data. To combine the advantages of RGB-D data and deep learning, a two-stream Faster R-CNN detection framework was proposed in this paper. The proposed method adds an extra depth stream and fuses it with the RGB stream at the feature level. The experimental results show that the proposed method achieves a higher detection precision than Faster R-CNN frameworks that use RGB only or fuse RGB and depth at the data level. Thus the proposed method fuses the color and depth data effectively and improves hand detection performance.
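A sketch of the feature-level fusion pattern the abstract names: separate backbones for RGB and depth, channel-wise concatenation, then a 1x1 convolution to mix the streams before the detection head; all sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    class TwoStreamFusion(nn.Module):
        def __init__(self, channels=512):
            super().__init__()
            self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, rgb_feat, depth_feat):
            # both inputs: (N, C, H, W) feature maps from the two backbones
            return self.mix(torch.cat([rgb_feat, depth_feat], dim=1))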
Local Sphere Normalization Embedding:An Improved Scheme for PCANet
LI Xiao-xin, WU Ke-song, QI Pan-pan, ZHOU Xuan and LIU Zhi-yong
Computer Science. 2018, 45 (5): 238-242.  doi:10.11896/j.issn.1002-137X.2018.05.041
When a facial image contains severe illumination change or occlusion, the feature maps produced in PCANet concentrate mainly in the vicinity of 0 and thus lose redundancy to some extent. To enhance the robustness of PCANet against such noise, a local sphere normalization (LSN) method was proposed and embedded into the convolutional layers of PCANet. LSN embedding improves the redundancy of the PCANet feature maps. Experiments on UMBDB and AR show that the improved PCANet is more redundant and robust, and achieves better recognition performance than the original PCANet. One important finding is that directly imposing LSN on the input image leads to unstable recognition performance; the feature redundancy should instead be improved gradually by LSN inside the network.
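A plausible reading of the operator, sketched under the assumption that "local sphere normalization" projects each local response vector onto the unit sphere so that feature values no longer crowd near 0; this is an illustrative formulation, not the authors' exact definition:

    import numpy as np

    def local_sphere_normalize(fmaps, eps=1e-8):
        # fmaps: (C, H, W) stack of feature maps; normalize across C per position
        norms = np.sqrt((fmaps ** 2).sum(axis=0, keepdims=True)) + eps
        return fmaps / norms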
Fitting Accuracy Guided Registration for Diffusion Weighted Images
ZHANG Wen-hua, ZHANG Ming-hui, GUO Yi-hao, LU Zhen-tai and LIU Ying
Computer Science. 2018, 45 (5): 243-249.  doi:10.11896/j.issn.1002-137X.2018.05.042
Quantitative analysis of diffusion-weighted (DW) images has been widely used in clinical diagnosis, but misalignments among DW images with different b values, caused by patients' respiratory and cardiac motion during the scan, reduce the diagnostic value, so registration is a precondition for precise quantitative analysis. Conventional registration algorithms cause unwanted distortions when registering images of different b values to the b0 image, especially for high-b-value images, because of the signal attenuation across b values and the inhomogeneity of the b0 image. This paper proposed a new method that uses fitting accuracy to guide a free-form deformation (FFD) model for precise registration of multi-b-value DW images. The fitting accuracy is obtained by fitting the DW images with the intra-voxel incoherent motion (IVIM) model, and a weight matrix constructed from the fitting accuracy adaptively weights the step size in FFD when searching for the optimal transformation field. The results on 5 series of multi-b-value DW images indicate that the proposed method improves the registration of diffusion weighted images and yields more accurate IVIM parameters after registration.
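The IVIM signal model used for the fitting-accuracy term is the standard bi-exponential S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). A sketch of fitting one voxel follows; the registration guidance itself is not reproduced, and the b values and bounds are illustrative:

    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, f, d_star, d):
        return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

    b_values = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)
    signal = ivim(b_values, 0.1, 0.02, 0.001)  # synthetic noise-free voxel
    params, _ = curve_fit(ivim, b_values, signal, p0=[0.1, 0.01, 0.001],
                          bounds=([0, 0, 0], [1, 0.5, 0.05]))
    residual = np.sum((ivim(b_values, *params) - signal) ** 2)  # fitting accuracy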
Improved Faster RCNN Training Method Based on Hard Negative Mining
AI Tuo, LIANG Ya-ling and DU Ming-hui
Computer Science. 2018, 45 (5): 250-254.  doi:10.11896/j.issn.1002-137X.2018.05.043
In the training process of the object detection method Faster R-CNN (Faster Region-based Convolutional Neural Network), there is a data imbalance problem: the training data contain an overwhelming number of negative examples. Aiming at this problem, a discriminant function combining localization loss and classification loss was proposed to identify hard examples. Based on this function and hard negative mining, an improved bootstrapping sampling method was proposed, and a five-step training method was formed by introducing the bootstrapping sampling into traditional Faster R-CNN training. Compared with traditional training, this method improves the network's generalization ability, reduces the false positive rate, and learns hard examples better. The experimental results show that the model trained by the five-step method attains 2.4% higher mAP (mean average precision) on the Pascal VOC 2007 dataset, reduces false positives by 3.2% on FDDB (Face Detection Data Set and Benchmark) at the same true positive rate, and fits bounding boxes more tightly.
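A generic hard negative mining step in the spirit of the abstract's discriminant function: rank negatives by a combined classification-plus-localization loss and keep only the hardest ones at a fixed negative:positive ratio; the weighting and ratio are assumed examples:

    def mine_hard_examples(samples, neg_pos_ratio=3, loc_weight=1.0):
        # samples: dicts {'is_pos': bool, 'cls_loss': float, 'loc_loss': float}
        pos = [s for s in samples if s['is_pos']]
        neg = [s for s in samples if not s['is_pos']]
        neg.sort(key=lambda s: s['cls_loss'] + loc_weight * s['loc_loss'],
                 reverse=True)  # hardest negatives first
        return pos + neg[:neg_pos_ratio * max(len(pos), 1)]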
Image Inpainting for Object Removal Based on Structure Sparsity and Patch Difference
ZHANG Lei and KANG Bao-sheng
Computer Science. 2018, 45 (5): 255-259.  doi:10.11896/j.issn.1002-137X.2018.05.044
Aiming at the problems of unreasonable filling order and mismatch errors in image inpainting for object removal, an inpainting method based on structure sparsity and patch difference was proposed. Firstly, the structure sparsity of the patch is added to the priority computation, so that the priority depends not only on the geometric characteristics of the target patch but also reflects its neighborhood characteristics; this improves the identification of the regional characteristics of the target patch and makes the filling order more reasonable. Secondly, the difference between the target patch and the exemplar patch is defined, and a new matching rule is built on it: the rule measures not only the similarity between existing pixels, but also the difference between existing pixels and filled pixels, effectively preventing mismatch errors and error accumulation. Experimental results show that the proposed method improves the restoration effect and makes the restored images more consistent with visual consistency requirements.
Automatic Characterization Study of Atherothrombotic Plaques Based on Intravascular Ultrasound Images
HUANG Zhi-jie, WANG Yi-nong and WANG Qing
Computer Science. 2018, 45 (5): 260-265.  doi:10.11896/j.issn.1002-137X.2018.05.045
To obtain accurate information about atherothrombotic plaques in cardiovascular vessels and assist the diagnosis and classification of plaque tissues, this study applied a machine learning method to automatically characterize atherothrombotic plaques in grayscale intravascular ultrasound (IVUS) images. In this study, 207 plaque samples in IVUS images were collected from 10 hospital patients with cardiovascular disease. Firstly, the size of a sliding patch is determined and its center pixel traverses the plaque area; the mean and entropy of each patch are calculated, and ten features of the patch along 4 directions are obtained with the co-occurrence matrix method. Secondly, further texture features of the plaque region are extracted using Gabor filters and local binary patterns (LBP). Finally, after dimension reduction by principal component analysis (PCA), the classifiers Liblinear, random forests and Harmonic to Minimum-Generalized LVQ (H2M-GLVQ) are used to classify the pixels of the plaque tissues, with manual characterization by an experienced physician as the gold standard. The results show that the overall identification rates of random forests and H2M-GLVQ exceed 80%; random forests performs relatively better, with identification rates of 89.04%, 80.23% and 73.77% for fibrotic, lipidic and calcified plaque tissues, respectively.
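A sketch of the co-occurrence step: gray-level co-occurrence matrix features of one patch along the 4 directions (0°, 45°, 90°, 135°); the patch size and property set are illustrative, and the paper's full feature set also includes the patch mean/entropy, Gabor and LBP features:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    def glcm_features(patch):
        glcm = graycomatrix(patch, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        props = ['contrast', 'homogeneity', 'energy', 'correlation']
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    patch = (np.random.rand(15, 15) * 255).astype(np.uint8)  # toy patch
    features = glcm_features(patch)  # 16 values: 4 properties x 4 directions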
Improvement of 3D Thinning Algorithm Based on Re-checking Procedure
HONG Han-yu, MA Er-wei and HUANG Li-kun
Computer Science. 2018, 45 (5): 266-272.  doi:10.11896/j.issn.1002-137X.2018.05.046
Abstract PDF(9057KB) ( 594 )   
References | Related Articles | Metrics
Existing thinning algorithms based on simple points may fail to preserve the connectivity of the extracted skeleton.This paper first proposed a set of isotropic deleting templates which gives the algorithm 90° rotation invariance,and then proposed a new re-checking procedure:after deleting points,it detects whether the connectivity of the target point's 26-neighborhood has changed to determine whether the target point should be removed,so the connectivity of 3D objects can be checked point by point.This method suits most thinning algorithms based on simple points and fixes cavities to preserve the topology structure.In rotation invariance tests,the proposed algorithm achieves the best results compared with other template-based algorithms.
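A simplified Python sketch of such a 26-neighborhood re-check, assuming SciPy and an interior (or zero-padded) voxel: a point is deletable only if removing it leaves the number of 26-connected foreground components in its neighborhood unchanged. Real thinning algorithms add further conditions (e.g. endpoint preservation) not shown here.

import numpy as np
from scipy import ndimage

FULL = np.ones((3, 3, 3), dtype=int)  # 26-connectivity structuring element

def connectivity_preserved(volume, z, y, x):
    # Count 26-connected components in the 3x3x3 neighborhood before and
    # after removing the center voxel; unchanged count means safe to delete.
    nb = volume[z-1:z+2, y-1:y+2, x-1:x+2].copy()
    _, before = ndimage.label(nb, structure=FULL)
    nb[1, 1, 1] = 0
    _, after = ndimage.label(nb, structure=FULL)
    return before == after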
Fuzzy Edge Detection Algorithm Based on RPCA
LI Shan-shan, CHEN Li, ZHANG Yong-xin and YUAN Ya-ting
Computer Science. 2018, 45 (5): 273-279.  doi:10.11896/j.issn.1002-137X.2018.05.047
Abstract PDF(14132KB) ( 785 )   
References | Related Articles | Metrics
Traditional edge detection methods fail to achieve a good compromise between anti-noise performance and edge detection accuracy.Aiming at this problem,this paper combined the effective matrix recovery capability of the robust principal component analysis(RPCA) model with the superior performance of fuzzy edge detection,and proposed a fuzzy edge detection algorithm based on robust principal component analysis,which formulates image edge detection as edge detection on the image's principal component.The steps of this approach can be summarized as follows.Firstly,the noisy image is decomposed into a sparse image and a low rank image by RPCA.Secondly,in order to extract the fuzzy property plane from the spatial domain of the low rank image,a threshold-based membership function is defined.Thirdly,image enhancement is performed in the fuzzy domain by using a fuzzy enhancement operator.Finally,the modified spatial domain is obtained and the edge detection is executed by using min or max operators.The experimental results demonstrate that the new approach can effectively suppress noise of different types and intensities and improve the accuracy of edge localization,which makes it suitable for low level image processing with lower real-time requirements.
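A minimal Python sketch of the fuzzy-domain steps, assuming the RPCA decomposition has already produced a low rank image with values in [0, 1] (the RPCA solver itself, e.g. via inexact ALM, is omitted). The membership form, the Pal-King-style intensification operator, and the max-min edge measure are standard choices assumed here, not necessarily the paper's exact definitions.

import numpy as np
from scipy import ndimage

def fuzzy_membership(img, threshold=0.5, fd=0.3):
    # Threshold-based membership mapping gray levels into the fuzzy
    # property plane [0, 1]; fd is a hypothetical fuzzifier parameter.
    return 1.0 / (1.0 + np.abs(img - threshold) / fd)

def fuzzy_enhance(u, iterations=2):
    # Classical contrast intensification operator (Pal-King style).
    for _ in range(iterations):
        u = np.where(u <= 0.5, 2 * u**2, 1 - 2 * (1 - u)**2)
    return u

def min_max_edges(u, size=3):
    # Edge strength as the local max-min range in the modified plane.
    return ndimage.maximum_filter(u, size) - ndimage.minimum_filter(u, size)

# Toy usage on a synthetic low rank image.
low_rank = np.tile(np.linspace(0, 1, 64), (64, 1))
edges = min_max_edges(fuzzy_enhance(fuzzy_membership(low_rank)))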
Vehicle Type Recognition Based on Deep Convolution Neural Network
SHI Lei, WANG Ya-min, CAO Yang-jie and WEI Lin
Computer Science. 2018, 45 (5): 280-284.  doi:10.11896/j.issn.1002-137X.2018.05.048
Abstract PDF(1505KB) ( 941 )   
References | Related Articles | Metrics
Traditional convolutional neural networks achieve low accuracy when recognizing similar vehicle models,and they can only be trained on grayscale images,losing the color information of the image.Based on this,a method of extracting image features based on a deep convolutional neural network(DCNN) was proposed.The deep convolutional neural network is trained on vehicle images with complex backgrounds,so as to achieve the purpose of recognizing vehicle models.In this paper,by using the deep learning framework Caffe,a deep convolutional neural network based on the AlexNet structure was built,trained on vehicle images and compared with a traditional convolutional neural network.The experimental results show that the accuracy of the DCNN model reaches 96.9%,higher than that of the traditional network.
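The paper defines its network in Caffe; purely as an illustration of the AlexNet-style layout on 3-channel input (so color is retained), here is a PyTorch sketch in Python with the classic layer sizes, which expects 227x227 RGB images. It is not the paper's exact configuration.

import torch.nn as nn

class VehicleNet(nn.Sequential):
    # AlexNet-style DCNN sketch for 227x227 RGB vehicle images.
    def __init__(self, num_classes):
        super().__init__(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

net = VehicleNet(num_classes=10)  # number of vehicle models is hypothetical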
Improved Algorithm of Haze Removal Based on Guided Filtering and Dark Channel Prior
CUI Qian-nan, TIAN Xiao-ping and WU Cheng-mao
Computer Science. 2018, 45 (5): 285-290.  doi:10.11896/j.issn.1002-137X.2018.05.049
Abstract PDF(6357KB) ( 954 )   
References | Related Articles | Metrics
Aiming at the degradation of outdoor images in foggy weather,such as reduced definition and color deviation,an improved haze removal algorithm based on guided filtering and the dark channel prior was proposed.Firstly,the rough transmittance is estimated through the dark channel prior,and then a guided filter is adopted to refine the transmission,using the grayscale image as the guidance image.Next,the failure areas of the dark channel prior are detected and an adjustment factor is used to modify the transmittance.At last,the restored image is obtained from the atmospheric scattering model,and the HSI color space intensity equalization method is used to enhance the restored image.Experimental results show that the proposed algorithm produces restored images with more detail and natural color,and improves operational efficiency.
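A minimal Python sketch of the dark channel prior pipeline the method builds on: rough transmission from the dark channel, then inversion of the atmospheric scattering model. The guided-filter refinement, failure-area adjustment and HSI enhancement steps are omitted; omega and t0 are the commonly used values, and the atmospheric light A is assumed given (it is usually estimated from the brightest dark-channel pixels).

import numpy as np
from scipy import ndimage

def dark_channel(img, size=15):
    # Minimum over color channels, then a local minimum filter.
    return ndimage.minimum_filter(img.min(axis=2), size=size)

def estimate_transmission(img, A, omega=0.95, size=15):
    # Rough transmission from the dark channel prior: t = 1 - w*dark(I/A).
    return 1.0 - omega * dark_channel(img / A, size=size)

def recover(img, t, A, t0=0.1):
    # Invert the scattering model: J = (I - A) / max(t, t0) + A.
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - A) / t + A

# Toy usage: img in [0, 1], A a per-channel atmospheric light estimate.
img = np.random.rand(64, 64, 3)
A = np.array([0.9, 0.9, 0.9])
restored = recover(img, estimate_transmission(img, A), A)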
Implementation and Optimization of Historical VaR on GPU
ZHANG Jie, WEN Min-hua, Jame LIN, MENG De-long and LU Hao
Computer Science. 2018, 45 (5): 291-294.  doi:10.11896/j.issn.1002-137X.2018.05.050
Abstract PDF(1853KB) ( 983 )   
References | Related Articles | Metrics
Value at Risk(VaR) is a fundamental tool for risk management,providing a quantitative measure of the downside risk of existing positions.Historical VaR is one of the most popular methods of VaR calculation and is widely used in financial institutions around the world.Real-time or quasi-real-time VaR calculation for financial products is important for avoiding financial risks in time.Due to the increasing complexity and number of financial products,the computational capability of existing CPU platforms can no longer meet the computing-speed requirements of risk management.To solve this problem,the historical VaR method was implemented and optimized on GPU by using CUDA.By improving the sorting algorithm,overlapping communication with multi-stream execution,decoupling data dependencies and applying fine-grained parallel optimization,a 42.6x speedup is achieved compared with a single-core CPU,which provides a solution for fast VaR calculation on large numbers of bonds.The optimization methods of this paper can also serve as a reference for implementing other financial algorithms on GPU.
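For reference, a minimal Python sketch of the historical VaR computation itself (the CPU baseline; the paper's GPU work parallelizes the scenario re-valuation and replaces the full sort, which is not shown here). The scenario data are synthetic placeholders.

import numpy as np

def historical_var(pnl_scenarios, confidence=0.99):
    # Historical VaR: the loss quantile of re-valued P&L scenarios.
    return -np.percentile(pnl_scenarios, 100 * (1 - confidence))

# Toy usage: 500 historical P&L scenarios for one portfolio.
pnl = np.random.normal(0.0, 1e4, 500)
print(historical_var(pnl))  # 99% VaR

The sort (or selection) needed for the percentile is exactly the step the paper optimizes on GPU, since only the tail order statistic is required rather than a fully sorted array.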
Evaluation Method of Big Data Service Resources Based on Cloud Computing
YANG Xiao-lan, QIAN Cheng and ZHU Fu-xi
Computer Science. 2018, 45 (5): 295-299.  doi:10.11896/j.issn.1002-137X.2018.05.051
Abstract PDF(1979KB) ( 943 )   
References | Related Articles | Metrics
With the introduction of cloud computing technology in the field of big data services,traditional QoS-based weighted evaluation methods cannot dynamically evaluate the validity and accuracy of cloud computing service resources,because a large number of cloud service resources need to be mobilized and their topological structures are complex.In order to solve this problem,this paper proposed a screening weighted evaluation method based on game-optimized scheduling.This method introduces the user's QoE evaluation index,fully considers the service and delay characteristics of dynamic scheduling,and obtains the Nash equilibrium point of the weighted evaluation parameters through a game over multiple indexes.The simulation results show that the proposed evaluation method can accurately evaluate the validity and accuracy of cloud computing service resource scheduling and is suitable for the expansion of big data service business.
Improved Morphological-wavelet Threshold De-noising Method
YANG Zheng-yi, LIU Bo-wen, REN Shan and HENG Nan-nan
Computer Science. 2018, 45 (5): 300-302.  doi:10.11896/j.issn.1002-137X.2018.05.052
Abstract PDF(1894KB) ( 837 )   
References | Related Articles | Metrics
Vibration signals of rotating machinery acquired in the field are usually contaminated with impulse and white noise.Wavelet threshold de-noising is effective for filtering white noise but ineffective against impulse noise,while a morphological filter can effectively remove impulse noise but has difficulty filtering white noise.An integrated filter based on morphological de-noising and wavelet threshold de-noising was proposed to efficiently purify the corrupted vibration signals of rotating machinery.The proposed filter combines the advantages of both morphological and wavelet filters,and it can simultaneously filter both impulse and white noise in the rotor vibration signal.Experiments on vibration signals of rotating machinery acquired in the field and on simulated data show that the integrated filter,which is composed of a morphological filter followed by an improved wavelet threshold filter,achieves a good de-noising result and extracts the rotor vibration signal submerged in the noise.The efficiency of the filter is experimentally demonstrated in comparison with previously reported filters.
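A minimal Python sketch of the two-stage structure, assuming SciPy and PyWavelets: a morphological open-close/close-open average to suppress impulses, followed by wavelet soft thresholding for white noise. The paper's "improved" threshold function is not reproduced; the universal threshold and db4 wavelet are assumptions.

import numpy as np
import pywt
from scipy import ndimage

def integrated_denoise(signal, size=5, wavelet="db4", level=4):
    # Stage 1: morphological open-close/close-open averaging kills impulses.
    oc = ndimage.grey_closing(ndimage.grey_opening(signal, size), size)
    co = ndimage.grey_opening(ndimage.grey_closing(signal, size), size)
    s = 0.5 * (oc + co)
    # Stage 2: wavelet soft thresholding suppresses the white noise.
    coeffs = pywt.wavedec(s, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(s)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Toy usage: a 50 Hz rotor tone with white noise and sparse impulses.
t = np.linspace(0, 1, 2048)
noisy = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(2048)
noisy[::200] += 5.0
clean = integrated_denoise(noisy)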
Load Balancing Strategy of MapReduce Clustering Based on Index Shift
ZHOU Hua-ping, LIU Guang-zong and ZHANG Bei-bei
Computer Science. 2018, 45 (5): 303-309.  doi:10.11896/j.issn.1002-137X.2018.05.053
Abstract PDF(2657KB) ( 825 )   
References | Related Articles | Metrics
As a distributed programming model,MapReduce has been widely used on large-scale and high-dimensional datasets.The original Hash partition function in MapReduce often causes data skew when the data distribution is not uniform.In clustering algorithms based on MapReduce,existing solutions for data skew are not applicable because the input data distribution of Reduce is unknown at each stage of the multiple iterations.To solve the imbalance problem of data partitioning,this paper proposed a strategy that shifts the remaining partition indexes when data is skewed.In the Map phase,the amount of data to be distributed to each reducer is counted,the global partition information is monitored,and the original partition function is dynamically modified by the JobTracker according to the data skew model,so that in the next partitioning process the Partitioner can shift the indexes of partitions that would cause data skew to other reducers with less load,eventually balancing the load of each node.Finally,this method was compared with existing methods on both synthetic and real datasets.The experimental results show that this strategy can resolve data skew in MapReduce clustering with better stability and efficiency than the Hash method and the sampling-based dynamic partitioning method.
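A single-process Python sketch of the index-shift idea, assuming a hypothetical skew_factor threshold: keys default to hash partitioning, and when a reducer's projected load exceeds the threshold its further partition indexes are shifted to the least-loaded reducer. The paper does this globally via the JobTracker across a real cluster; this only illustrates the redirection logic.

from collections import defaultdict

class IndexShiftPartitioner:
    def __init__(self, n_reducers, skew_factor=1.5):
        self.n = n_reducers
        self.skew_factor = skew_factor       # hypothetical skew threshold
        self.load = defaultdict(int)         # per-reducer load counters
        self.shift = {}                      # key -> redirected reducer index

    def partition(self, key, weight=1):
        r = self.shift.get(key, hash(key) % self.n)
        avg = (sum(self.load.values()) + weight) / self.n
        if self.load[r] + weight > self.skew_factor * avg:
            # Shift this key's partition index to the least-loaded reducer.
            r = min(range(self.n), key=lambda i: self.load[i])
            self.shift[key] = r
        self.load[r] += weight
        return r

# Toy usage: one hot key would overload a single reducer under plain hashing.
p = IndexShiftPartitioner(4)
for k in ["hot"] * 100 + ["a", "b", "c"] * 10:
    p.partition(k)
print(dict(p.load))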
Design and Simulation of Intelligent Control Algorithm for Quad-rotors under Wind Disturbance
XIAO Chang-shi, MAO Yi-han, YUAN Hai-wen and WEN Yuan-qiao
Computer Science. 2018, 45 (5): 310-316.  doi:10.11896/j.issn.1002-137X.2018.05.054
Abstract PDF(4710KB) ( 1650 )   
References | Related Articles | Metrics
In order to improve the anti-interference ability and control accuracy of a quad-rotor UAV under wind disturbance,a method of optimizing fuzzy controller rules with a genetic algorithm was proposed to improve the performance of the controller.Firstly,the characteristics of wind speed variation under natural conditions were analyzed,a corresponding mathematical model was established,and the wind was introduced into the system as environmental noise.Based on the dynamic model of the quad-rotor UAV,a fuzzy PID controller was designed to control it.At the same time,the fuzzy subsets in the fuzzy controller were genetically coded,and an improved genetic algorithm was designed to readjust and optimize the fuzzy rules.The experimental results in the Matlab/Simulink simulation environment show that the algorithm effectively improves the anti-interference ability and control accuracy of the quad-rotor UAV in the face of complex interference.
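To illustrate the evolve-controller-against-disturbance loop in a self-contained way, here is a deliberately simplified Python sketch: a genetic algorithm tunes plain PID gains on a toy first-order plant with additive "wind" noise. This is a stand-in; the paper encodes fuzzy subsets and rules on the full quad-rotor model in Matlab/Simulink, neither of which is reproduced here.

import numpy as np

def step_cost(gains, dt=0.01, T=5.0):
    # Integral-squared-error of a unit-step response under a PID controller
    # on a toy plant dy = (-y + u + wind) dt; lower cost is better.
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    rng = np.random.default_rng(0)           # fixed wind realization
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u + 0.2 * rng.standard_normal())  # wind disturbance
        cost += e * e * dt
    return cost

def genetic_tune(pop_size=30, gens=40, mut=0.1):
    # Selection of the fitter half, then Gaussian mutation of the survivors.
    rng = np.random.default_rng(1)
    pop = rng.uniform(0.0, 5.0, (pop_size, 3))
    for _ in range(gens):
        fit = np.array([step_cost(g) for g in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]
        children = parents + mut * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmin([step_cost(g) for g in pop])]

print(genetic_tune())  # evolved (kp, ki, kd)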
Availability Analysis of Virtual Cluster Nodes Based on State Transition Diagram
CHE Jian-hua, REN Shou-gang, YU Yong and XU Huan-liang
Computer Science. 2018, 45 (5): 317-321.  doi:10.11896/j.issn.1002-137X.2018.05.055
Abstract PDF(1721KB) ( 777 )   
References | Related Articles | Metrics
Aiming at the availability evaluation of virtual cluster nodes,a virtual cluster node availability model based on state transition diagrams was proposed.First,five lifecycle states of virtual cluster nodes are established by analyzing their deployment mode and running process,and the transition relations between the five lifecycle states are expounded.Then,the state transition diagrams of a virtual cluster node without and with a standby node are presented respectively,and the corresponding availability models are derived from these state transition diagrams.Numerical simulation experiments were conducted after setting the parameter values of the proposed availability models according to the running log of a real virtual cluster system.The experimental results show that the analytical results of the proposed availability models accord with the real availability level and properly reflect the availability behavior of virtual cluster nodes.
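A minimal Python sketch of how steady-state availability falls out of such a state transition model, assuming a continuous-time Markov chain: solve pi Q = 0 with sum(pi) = 1 and add up the probabilities of the "available" states. Only three hypothetical states and made-up rates are used here for brevity; the paper's five lifecycle states and fitted parameters are not reproduced.

import numpy as np

# Generator matrix Q over hypothetical states (rows sum to zero):
# 0 = running, 1 = degraded, 2 = failed; rates are illustrative only.
Q = np.array([
    [-0.02,  0.02,  0.00],   # running  -> degraded
    [ 0.50, -0.60,  0.10],   # degraded -> running / failed
    [ 1.00,  0.00, -1.00],   # failed   -> running (repair)
])

# Solve pi Q = 0 together with the normalization constraint sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # states treated as "available"
print(pi, availability)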