Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 46 Issue 1, 15 January 2019
Surveys
State-of-the-art Analysis and Perspectives of 2018 China HPC Development
ZHANG Yun-quan
Computer Science. 2019, 46 (1): 1-5.  doi:10.11896/j.issn.1002-137X.2019.01.001
Based on the data of China’s high performance computer TOP100 rankings published in November 2018, this paper makes an in-depth analysis of the current development status of high performance computers in China from the perspectives of overall performance, manufacturers, industries and other aspects. The average Linpack performance of China’s TOP100 continues to be higher than that of the international TOP500, and the entry-performance threshold of the TOP100 still exceeds that of the TOP500. Almost all supercomputing systems on China’s TOP100 are now domestic systems, and Sugon (Shuguang) and Lenovo lead in the number of systems on the list; the three-way dominance of Sugon, Lenovo and Inspur continues to be maintained and strengthened. On this basis, according to the performance data of the seventeen editions of the ranking list, this paper analyzes and predicts the future development trend of high-performance computers in mainland China. According to the new data, we believe that machines with an exascale (Eflops) peak will appear between 2018 and 2019, machines with a 10 Eflops peak will appear between 2022 and 2023, and machines with a 100 Eflops peak will appear between 2024 and 2025.
Research on Multi-keyword Ranked Search over Encrypted Cloud Data
DAI Hua, LI Xiao, ZHU Xiang-yang, YANG Geng, YI Xun
Computer Science. 2019, 46 (1): 6-12.  doi:10.11896/j.issn.1002-137X.2019.01.002
With the extensive development of cloud computing, storage and computing outsourcing services are becoming more and more acceptable nowadays. To protect the privacy of outsourced data, privacy-preserving multi-keyword ranked search over encrypted cloud data has attracted the attention of researchers and has become a hot research topic recently. This paper introduces the system model and threat model of existing work, and describes the problems of privacy preservation, search efficiency and accuracy, search result completeness, etc. Typical works and extended research on multi-keyword ranked search are surveyed, and the main ideas of those methods are discussed in detail. Finally, conclusions about current works are drawn, and future research directions are proposed.
Survey of Content Centric Network Based on SDN
YANG Ren-yu, HAN Yi-gang, ZHANG Fan, FENG Fei
Computer Science. 2019, 46 (1): 13-20.  doi:10.11896/j.issn.1002-137X.2019.01.003
The practical deployment of Content Centric Networking (CCN) is confronted with numerous challenges. Meanwhile, Software Defined Networking (SDN) has been developing rapidly, and its open, programmable and centrally controlled architecture brings a new direction for CCN. Accordingly, the realization of CCN based on SDN has gradually attracted attention. This paper summarizes the background knowledge, identifies the key issues in the deployment of SDN-based CCN, and analyzes the advantages and difficulties of their integration. Then, it reviews the research status, divides the existing integrated network schemes into purely centralized schemes and semi-centralized schemes, evaluates representative designs of each scheme, and sums up the characteristics of each kind of scheme by comparison. Finally, it presents topics for future research.
Review of Time Series Prediction Methods
YANG Hai-min, PAN Zhi-song, BAI Wei
Computer Science. 2019, 46 (1): 21-28.  doi:10.11896/j.issn.1002-137X.2019.01.004
A time series is a set of random variables ordered by timestamp. It is often the observation of an underlying process, in which values are collected at uniformly spaced time instants according to a given sampling rate. Time series data essentially reflect the trend that one or several random variables change with time. The core of time series prediction is to mine the rules from data and use them to estimate future data. This paper summarizes time series prediction methods in three categories, namely traditional time series prediction methods, machine learning based time series prediction methods and parameter-model based online time series prediction methods, and further discusses future research directions.
Research on Time Series Classification Based on Shapelet
YAN Wen-he, LI Gui-ling
Computer Science. 2019, 46 (1): 29-35.  doi:10.11896/j.issn.1002-137X.2019.01.005
Time series are high-dimensional real-valued data changing with time, and they appear extensively in the fields of medicine, finance, monitoring and others. Because the accuracy of conventional classification algorithms on time series is not ideal and they lack interpretability, and because a shapelet is a discriminative continuous time-series subsequence, time series classification based on shapelets has become one of the hot spots in time series classification research. First, by analyzing the existing time-series shapelet discovery methods, this paper classifies them into two categories, namely shapelet discovery from shapelet candidates and shapelet learning by optimizing an objective function, and introduces the applications of shapelets. Then, according to the classification object, it reviews the shapelet-based univariate and multivariate time series classification algorithms. Finally, it points out further research directions for shapelet-based time series classification.
Application Status and Development Trends of Cardiac Magnetic Resonance Fast Imaging Based on Compressed Sensing Theory
HENG Yang, CHEN Feng, XU Jian-feng, TANG Min
Computer Science. 2019, 46 (1): 36-44.  doi:10.11896/j.issn.1002-137X.2019.01.006
Cardiac Magnetic Resonance (CMR) has several shortcomings in practical application, such as slow imaging speed and inevitable artifacts. Compressed Sensing (CS) is applied to CMR to make full use of the redundancy of k-space information: images are reconstructed from partial k-space data to reduce artifacts and ensure image accuracy. This paper presents a review of the domestic and foreign literature published in the recent three years. Firstly, it describes the current situation of CMR, the commonly used sequences, sampling masks and compressed sensing theory. Then, it presents the latest results and applications of CS-CMR, with an introduction to objective quantitative indices and the authors’ research progress in this field. Finally, it summarizes the shortcomings of current research and analyzes further research trends.
CCDM2018
Uncertainty Measure of Rough Fuzzy Sets in Hierarchical Granular Structure
YANG Jie, WANG Guo-yin, ZHANG Qing-hua, FENG Lin
Computer Science. 2019, 46 (1): 45-50.  doi:10.11896/j.issn.1002-137X.2019.01.007
There is a consensus that the uncertainty of Pawlak’s rough set model is rooted in the objects contained in the boundary region of the target concept, while the uncertainty of rough fuzzy sets results from three regions, because the objects in the positive or negative regions are probably also uncertain. Moreover, in the rough fuzzy set model, a fuzzy concept can be characterized by different rough approximation spaces in a hierarchical granular structure, which raises the question of how the uncertainty of a fuzzy concept changes with granularity. This paper first proposes a fuzziness-based uncertainty measure, analyzes the rough fuzzy set model through average fuzzy sets, and draws the conclusion that the uncertainty measure for rough fuzzy sets is also suitable for probabilistic rough sets. Based on the fuzziness-based uncertainty measure, it reveals the change rules of the uncertainty of rough fuzzy sets in a hierarchical granular structure. Then, it discusses the uncertainties of the three regions (positive region, boundary region and negative region) and reveals the change rules of their uncertainty in a hierarchical granular structure. Finally, experimental results demonstrate the effectiveness of the proposed uncertainty measure.
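For orientation, fuzziness-based measures of this kind are commonly built on the linear fuzziness index of a fuzzy set (an illustration in our notation; the paper’s exact measure is not reproduced here):

$$F(A)=\frac{2}{n}\sum_{i=1}^{n}\min\big(\mu_A(x_i),\,1-\mu_A(x_i)\big),$$

which is 0 for a crisp set and maximal when every membership degree equals 0.5, matching the intuition that objects with membership near 0.5 are the most uncertain.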
Network Dimension: A New Measure for Complex Networks
LIU Sheng-jiu, LI Tian-rui, LIU Xiao-wei
Computer Science. 2019, 46 (1): 51-56.  doi:10.11896/j.issn.1002-137X.2019.01.008
How to measure complex networks has always received much attention. Based on an analysis of the fractal dimension of self-similar complex networks, this paper proposed a new measure for complex networks, named the network dimension. The network dimension is expressed as the logarithm of the sum of edge weights divided by the logarithm of the sum of node weights of a complex network, with the weights of both edges and nodes extended to the real and complex number fields. Calculation methods for the network dimensions of weighted networks with different types of weights are presented. Finally, several representative classical complex network models are taken as examples to discuss properties of the proposed network dimension.
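In symbols, the definition quoted above reads (notation ours): for a network $G=(V,E)$ with edge weights $w(e)$ and node weights $w(v)$,

$$\dim(G)=\frac{\log\sum_{e\in E}w(e)}{\log\sum_{v\in V}w(v)},$$

with the weights allowed to take real or complex values as stated in the abstract.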
Adversarial Multi-armed Bandit Model with Online Kernel Selection
LI Jun-fan, LIAO Shi-zhong
Computer Science. 2019, 46 (1): 57-63.  doi:10.11896/j.issn.1002-137X.2019.01.009
Online kernel selection is an important component of online kernel methods, and it can be classified into three categories: the filter, the wrapper and the embedder. Existing work on online kernel selection explores the wrapper and the embedder categories and empirically adopts the filter approach, but there has been no unified framework for comparing, analyzing and investigating online kernel selection problems. This paper proposes a unified framework for online kernel selection research via multi-armed bandits, which can model the wrapper and the embedder simultaneously. Given a set of candidate kernels, each kernel corresponds to an arm in an adversarial bandit model. At each round of online kernel selection, multiple kernels are randomly chosen according to a probability distribution, and the distribution is updated via the exponentially weighted average method. In this way, an online kernel selection problem is reduced to an adversarial bandit problem in a non-oblivious adversary setting, and a unified framework is developed for online kernel selection research, modeling the wrapper and the embedder uniformly. The paper further defines a new regret notion for online kernel selection, and proves that the wrapper within the framework enjoys a sub-linear weak expected regret bound and the embedder within the framework enjoys a sub-linear expected regret bound. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed framework.
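As a reading aid, the exponentially weighted update described above is of the Exp3 style sketched below (a single-arm draw is shown for brevity, whereas the paper samples multiple kernels per round; `observe_loss` is a hypothetical callback standing in for the per-round kernel loss, and `eta` is an assumed step size):

```python
import numpy as np

def exp3_kernel_selection(kernels, rounds, eta=0.1):
    """Minimal Exp3-style sketch; the loss model and eta are assumptions."""
    K = len(kernels)
    weights = np.ones(K)
    for _ in range(rounds):
        probs = weights / weights.sum()        # sampling distribution over kernels/arms
        arm = np.random.choice(K, p=probs)     # choose a kernel this round
        loss = kernels[arm].observe_loss()     # hypothetical per-round loss in [0, 1]
        estimate = loss / probs[arm]           # importance-weighted loss estimate
        weights[arm] *= np.exp(-eta * estimate)  # exponentially weighted update
    return weights / weights.sum()
```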
Multi-source Online Transfer Learning Algorithm for Classification of Data Streams with Concept Drift
QIN Yi-xiu, WEN Yi-min, HE Qian
Computer Science. 2019, 46 (1): 64-72.  doi:10.11896/j.issn.1002-137X.2019.01.010
Existing algorithms for the classification of data streams with concept drift usually train a new classifier on newly collected data when a new concept is detected, and discard the historical models. This strategy often leads to insufficient training of the classifier in a short time, because enough training data for the new concept has not yet been collected in the initial stage. Furthermore, some existing online transfer learning algorithms for such classification only take advantage of a single source domain, which can lead to poor classification accuracy when the historical concepts differ from the new concept. To solve these problems, this paper proposes a multi-source online transfer learning algorithm for the classification of data streams with concept drift (CMOL), which can utilize the knowledge of multiple historical classifiers. CMOL adopts a dynamic classifier weight adjustment mechanism and updates the classifier pool according to the weights of the classifiers in it. Experiments validate that CMOL adapts to new concepts faster than corresponding methods when concept drift occurs, and achieves higher classification accuracy.
Image Retrieval Algorithm Based on Transfer Learning
LI Xiao-yu, NIE Xiu-shan, CUI Chao-ran, JIAN Mu-wei, YIN Yi-long
Computer Science. 2019, 46 (1): 73-77.  doi:10.11896/j.issn.1002-137X.2019.01.011
In recent years, with the development of the Internet and the popularity of smart devices, the number of images stored online is growing explosively, and the number of users of different social networks and media continues to grow. As a result, the multimedia data uploaded to the network have changed: an image uploaded by a user carries not only the visual information of the image itself, but also the label and text information that the user attaches to it. Therefore, how to provide fast and accurate image retrieval results to users is a new challenge in the field of multimedia retrieval. This paper proposes an image retrieval algorithm based on transfer learning. It learns the visual information and the text information at the same time, then transfers the learned results to the visual information domain, so that the feature contains cross-modal information. Experimental results show that the proposed algorithm achieves better image retrieval results.
Confidence Interval Method for Classification Usability Evaluation of Data Sets
TAN Xun-tao, GU Yi-yi, RUAN Tong, YUAN Yu-bo
Computer Science. 2019, 46 (1): 78-85.  doi:10.11896/j.issn.1002-137X.2019.01.012
Effectively evaluating the usability of training data sets has always been a difficult problem, which hinders the application of intelligent classification systems. Aiming at data classification in the field of machine learning, this paper proposes an evaluation method of classification usability, based on interval analysis and information granulation, to measure the separability of data sets. In this method, a data set is defined as a classification information system, the concept of the classification confidence interval is put forward, and information granulation is carried out by interval analysis. Under this granulation strategy, the paper defines a mathematical model of classification usability, and further gives calculation methods of classification usability for a single attribute and for the whole data set. 18 UCI standard data sets are selected as evaluation objects, their classification usability evaluation results are given, and 3 classifiers are selected to classify these data sets. Finally, the effectiveness and feasibility of the evaluation method are verified by analysis of the experimental results.
Image Restoration Method Based on Improved Inverse Filtering for Diffractive Optic Imaging Spectrometer
ZHANG Ming-qi, CAO Guo, CHEN Qiang, SUN Quan-sen
Computer Science. 2019, 46 (1): 86-93.  doi:10.11896/j.issn.1002-137X.2019.01.013
In order to solve the image-blurring problem caused by the interference of out-of-focus optical images with the in-focus image in a diffractive optic imaging spectrometer (DOIS), an improved inverse filtering restoration method was proposed to solve the ill-posed problem in inverse filtering and restore the diffraction spectrum image. This method changes the solution of the primal problem by introducing a regularization matrix to regularize the inverse filtering function, thus suppressing the influence of noise on restored images. It reduces the ill-conditioning of the matrix and obtains a better restoration result through three procedures: converting the image restoration process into matrix inversion, adding a regularizing filter to the SVD (singular value decomposition) method, and adjusting the form of the regularization matrix and the values of its parameters. Experiments show that the improved inverse filtering method is effective for restoring the spectral images formed by a diffractive optic imaging spectrometer. It not only increases the Laplacian gradient and QI (Quality Index) values, but also reduces the RMSE (root-mean-square error) value to a certain extent. Meanwhile, this method suppresses the noise interference in the blurred images, enhances image clarity, restores a single spectrum image with a higher similarity to the reference image, and obtains better spectral curves for analyzing geomorphological features.
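For context, the textbook way to add a regularizing filter to an SVD-based inverse (a Tikhonov-type filter; the paper’s exact regularization matrix may differ) replaces the unstable factors $1/\sigma_i$ of plain inverse filtering with damped ones: given the blur operator $H=U\Sigma V^{T}$ and observation $y$,

$$\hat{x}=\sum_{i}\frac{\sigma_i}{\sigma_i^{2}+\lambda}\,(u_i^{T}y)\,v_i,$$

so that small singular values no longer amplify noise, which is precisely the ill-posedness being suppressed.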
Sample Adaptive Classifier for Imbalanced Data
CAI Zi-xin, WANG Xin-yue, XU Jian, JING Li-ping
Computer Science. 2019, 46 (1): 94-99.  doi:10.11896/j.issn.1002-137X.2019.01.014
In the era of big data, imbalanced data are ubiquitous and inevitable, which makes imbalance a critical classification issue. Taking binary classification as an example, traditional learning algorithms cannot sufficiently learn the hidden patterns of the minority class and tend to be biased towards the majority class. An effective remedy is cost-sensitive learning, which improves prediction performance on the minority class by assigning a higher cost to misclassification of minority instances. However, these methods treat all instances within one class equally, whereas different instances may make different contributions to the learning process. To make cost-sensitive learning more effective, this paper proposes a sample-adaptive cost-sensitive strategy for the classification of imbalanced data, which assigns a different misclassification weight to every single instance. Firstly, the strategy determines the distances between the boundary and the instances according to the local distribution of the instances. Then, following margin theory, it assigns higher weights to the instances nearer to the boundary. The proposed strategy is applied to the classical LDM method, and a series of experiments on UCI datasets proves that the sample-adaptive cost-sensitive strategy can effectively improve classifier performance on imbalanced data.
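The weighting idea can be sketched as follows (the exponential form and the scale parameter `tau` are our assumptions for illustration, not the paper’s exact rule):

```python
import numpy as np

def sample_weights(boundary_dist, tau=1.0):
    """Instances nearer the class boundary get larger misclassification weights.

    boundary_dist: per-instance distance to the estimated boundary, derived
    from the local distribution of the instances (assumed precomputed).
    """
    w = np.exp(-np.asarray(boundary_dist) / tau)  # near boundary => weight near 1
    return w / w.mean()                           # keep the average cost at 1
```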
Optimized Selection Method of Cycle-consistent Loss Coefficient of CycleGAN in Image Generation with Different Texture Complexity
XU Qiang, ZHONG Shang-ping, CHEN Kai-zhi, ZHANG Chun-yang
Computer Science. 2019, 46 (1): 100-106.  doi:10.11896/j.issn.1002-137X.2019.01.015
High-quality image generation has always been a difficult and hot topic in computer vision. CycleGAN achieves good results in unsupervised image generation tasks by using cycle-consistent losses. However, when faced with image generation tasks of different texture complexity, CycleGAN keeps its cycle-consistent loss coefficient unchanged by default, and the generated images suffer from weaknesses such as texture distortion or even texture disappearance, so the quality of the generated images cannot be guaranteed. In this paper, the complexity of image texture is measured by integrating the spatial and temporal dimensions of images, the importance of the cycle-consistent loss in the optimization objective is clarified, and the correlation between the size of the cycle-consistent loss coefficient and the quality of generated images with different texture complexity is discovered and explained: the higher the texture complexity, the larger the cycle-consistent loss coefficient should be, and vice versa. Using benchmark and self-acquired image data sets, classification accuracy based on transfer learning is introduced as the quality assessment indicator for generated images. The experimental results show that choosing an appropriate cycle-consistent loss coefficient can effectively improve the quality of the generated images.
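For reference, the coefficient in question is the $\lambda$ of the standard CycleGAN objective

$$\mathcal{L}(G,F,D_X,D_Y)=\mathcal{L}_{\mathrm{GAN}}(G,D_Y)+\mathcal{L}_{\mathrm{GAN}}(F,D_X)+\lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F),\quad \mathcal{L}_{\mathrm{cyc}}(G,F)=\mathbb{E}_x\big[\|F(G(x))-x\|_1\big]+\mathbb{E}_y\big[\|G(F(y))-y\|_1\big],$$

so the finding above amounts to scaling $\lambda$ up with the measured texture complexity of the task.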
Edge Bundling Method of Spiral Graph Based on Interval Classification
ZHU Li-xia, LI Tian-rui, TENG Fei, PENG Bo
Computer Science. 2019, 46 (1): 107-111.  doi:10.11896/j.issn.1002-137X.2019.01.016
The spiral graph is a common method for visualizing time series data. It can not only display multi-stage data in one plane, but also present data of different time lengths in a limited space. To alleviate the visual clutter caused by intersecting helical lines in existing spiral visualization methods, an edge bundling method is of great significance. First, the data points on each state circle are classified. Then virtual bundling circles are set between adjacent state circles, and the data points on a state circle are mapped to the corresponding virtual bundling circle by the edge bundling function. Finally, to achieve the bundling effect, Bézier curves are drawn between each state circle and its corresponding virtual bundling circle, and spiral curves are drawn between virtual bundling circles. Experimental results show that the edge-bundling algorithm works for large-scale data visualization and can effectively alleviate visual clutter.
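A minimal sketch of the connector drawing step described above (the placement of the control point is an illustrative assumption):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=50):
    """Points of a quadratic Bezier curve from p0 to p2, bent toward p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Bundle a point on a state circle (p0) to its image on the virtual
# bundling circle (p2), pulling the curve slightly toward the spiral center.
p0, p2 = np.array([1.0, 0.0]), np.array([0.6, 0.4])
p1 = 0.8 * 0.5 * (p0 + p2)
curve = quadratic_bezier(p0, p1, p2)
```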
Migration Optimization Algorithm Based on State Transition and Fuzzy Thinking
ZHONG Da-jian, FENG Xiang, YU Hui-qun
Computer Science. 2019, 46 (1): 112-116.  doi:10.11896/j.issn.1002-137X.2019.01.017
Inspired by the existing animal migration optimization (AMO) algorithm, a novel migration optimization algorithm based on state transition and fuzzy thinking (SMO) was proposed for solving global optimization problems. The algorithm constructs a state model and a fuzzy opposite model. The state model describes the distribution of the whole group with two states: the dispersed state and the centralized state. In the dispersed state, the group is distributed randomly in the solution space and a probabilistic decision-making method is used to search it; this is the exploration process. As individuals learn from each other, the differences between them become smaller and smaller and the group transitions to the centralized state, where a step-based search strategy is used to find the optimal value; this is the exploitation process. The balance between exploration and exploitation is thus obtained by using different search strategies according to the state of the group. Secondly, the algorithm uses a fuzzy opposite model, which makes full use of the fuzzy opposite positions of individuals, increases the diversity of the population, and improves the convergence precision of the algorithm. The convergence of the algorithm is proved theoretically, and twelve benchmark functions are used to verify its performance. Finally, the algorithm is compared with three other optimization algorithms, and experimental results attest to the effectiveness of SMO.
Network Representation Learning Based on Multi-view Ensemble Algorithm
YE Zhong-lin, ZHAO Hai-xing, ZHANG Ke, ZHU Yu
Computer Science. 2019, 46 (1): 117-125.  doi:10.11896/j.issn.1002-137X.2019.01.018
Existing network representation learning algorithms mainly consist of methods based on shallow neural networks and approaches based on matrix factorization, and it has been proved that network representation learning based on shallow neural networks factorizes a feature matrix of the network structure. In addition, most existing network representation algorithms learn features from structure information only, which is single-view representation learning, whereas networks contain various kinds of views. Therefore, this paper proposes a network representation learning approach based on multi-view ensemble (MVENR). The algorithm abandons the neural network training process and integrates the ideas of matrix information ensemble and factorization into the network representation vectors. MVENR gives an effective combination strategy among the network structure view, the link weight view and the node attribute view. It makes up for the common neglect of network link weights, and alleviates the feature sparsity problem caused by training on a single view. The experimental results show that the proposed algorithm outperforms common joint learning algorithms and methods based purely on network structure features, and it is a simple and efficient network representation learning algorithm.
Hybrid Recommendation Algorithm Based on Deep Learning
ZENG Xu-yu, YANG Yan, WANG Shu-ying, HE Tai-jun, CHEN Jian-bo
Computer Science. 2019, 46 (1): 126-130.  doi:10.11896/j.issn.1002-137X.2019.01.019
Recommendation systems play an increasingly indispensable role in the development of e-commerce, but the sparsity of users’ ratings of items is often an important reason for low recommendation accuracy. A common remedy is to process auxiliary information to alleviate the sparsity of user ratings and improve the accuracy of the predicted scores; in particular, hidden features of an item can be extracted from its text data through related models. In recent years deep learning has developed rapidly, so this paper adopts the variational autoencoder (VAE), a new type of network structure with powerful feature extraction capability, and proposes a novel context-aware recommendation model that integrates the unsupervised VAE into probabilistic matrix factorization (PMF), called VAEMF. Firstly, TF-IDF is used to preprocess the items’ review documents. Then, the VAE is used to capture the context information features of the items. Finally, probabilistic matrix factorization is used to improve the accuracy of the predicted scores. Experimental results on two real data sets show that this method is superior to autoencoder-based and probabilistic matrix factorization recommendation methods.
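The preprocessing step can be pictured as below (a sketch with scikit-learn; the vocabulary size is an illustrative choice, not the paper’s setting):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Each item's review document becomes one TF-IDF vector, which is then fed
# to the VAE to extract the item's context features.
docs = ["great battery life", "screen is too dim", "fast shipping, solid build"]
vectorizer = TfidfVectorizer(max_features=8000, stop_words="english")
item_vectors = vectorizer.fit_transform(docs).toarray()  # items x vocabulary
```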
Approach for Knowledge Reasoning Based on Hesitant Fuzzy Credibility
ZHENG Hong-liang, HOU Xue-hui, SONG Xiao-ying, PANG Kuo, ZOU Li
Computer Science. 2019, 46 (1): 131-137.  doi:10.11896/j.issn.1002-137X.2019.01.020
In order to solve the problem of inaccurate estimation of credibility in uncertainty reasoning, hesitant fuzzy sets are introduced into uncertainty reasoning, and the concept of hesitant fuzzy credibility is given. On the basis of credibility-based knowledge representation, a knowledge representation method with hesitant fuzzy credibility is defined. To deal with missing information in the experts’ reasoning process, an information completion method based on averaging is proposed. Reasoning models for a single rule and for multiple rules in parallel with hesitant fuzzy credibility are constructed, and the concrete steps of knowledge representation and uncertainty reasoning based on hesitant fuzzy credibility are given. Finally, an example illustrates the effectiveness and feasibility of the proposed method.
Network & Communication
Multiuser Detection Scheme for SCMA Systems Based on Stability of Belief Propagation
LI Mao, ZHOU Zhi-gang, WANG Tao
Computer Science. 2019, 46 (1): 138-142.  doi:10.11896/j.issn.1002-137X.2019.01.021
The main feature of sparse code multiple access (SCMA), a non-orthogonal multiple access technique, is that it supports overloaded connections with limited resources, which can greatly improve spectrum utilization. Thanks to the sparsity of SCMA codebook sets, the message passing algorithm (MPA) has become the basic receiver decoding algorithm. Although the traditional MPA achieves a bit error ratio (BER) performance similar to that of the maximum likelihood (ML) detection scheme, the complexity of its exponential calculations is still high. To further reduce this complexity, a novel low-complexity detection algorithm based on a dynamic edge selection strategy was proposed to cut unnecessary node operations: in each iteration, the stability of the belief propagated from function node to variable node in the factor graph model is used to dynamically determine whether a node update operation is required. The simulation results show that the complexity of the dynamic edge selection algorithm is significantly reduced while the BER is well balanced.
Intra-domain Routing Protection Scheme by Optimizing Link Weight
GENG Hai-jun
Computer Science. 2019, 46 (1): 143-147.  doi:10.11896/j.issn.1002-137X.2019.01.022
The intra-domain link-state routing protocols currently deployed on the Internet, such as Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS), generally adopt reactive recovery schemes to cope with network failures: they need a global link-state advertisement and must recalculate routing tables when link failures occur, inevitably causing network outages. Meanwhile, with the emergence of real-time and mission-critical applications, stringent reliability and high availability are required. As a result, academia and industry have proposed proactive routing protection solutions, which compute backup paths in advance so that packets can be forwarded over the precomputed paths as soon as a failure is detected. However, existing routing protection algorithms face two problems: (1) the disjointness of the default path and the backup path is very low (e.g., ECMP and LFA); (2) to compute two paths with high disjointness, restrictions must be imposed on the default path (e.g., the default path is no longer the shortest path). This paper first formulates the construction of disjoint paths as an integer programming problem, and then proposes a heuristic algorithm to compute an approximately optimal solution. Finally, the algorithm is evaluated on real, measured and generated networks. The experimental results show that it can greatly enhance the disjointness of the shortest path and the backup path, and improve network availability.
Distributed Knowledge Framework of IOT Software Based on WSAN and Its Implementation Strategy
LIU Zheng-hao, ZHANG Guang-quan
Computer Science. 2019, 46 (1): 148-154.  doi:10.11896/j.issn.1002-137X.2019.01.023
With the development of the Internet of Things (IoT) and the emergence of intelligent sensors, traditional software has gradually exposed its performance shortcomings. In order to meet the needs of device autonomy and utilize the computing resources of the edge network, this paper analyzes the characteristics of IoT software and puts forward a distributed knowledge framework together with its implementation method. By identifying the environmental logic and defining its evolution rules, the logic is embedded into the underlying wireless nodes; smart sensors then monitor environmental changes and trigger the related software logic to keep the software operating correctly. Finally, the runtime performance of the method in this environment and the analysis of relevant indicators illustrate its applicability.
Link Prediction Method Based on Local Community and Nodes’ Relativity
YANG Xu-hua, YU Jia, ZHANG Duan
Computer Science. 2019, 46 (1): 155-161.  doi:10.11896/j.issn.1002-137X.2019.01.024
Link prediction methods based on network topology information are effective ways to predict unknown or future network edges, and in real applications, further extraction and analysis of network topology helps to improve the precision of link prediction. This paper proposes a new link prediction method based on local community and nodes’ relativity (HCRP). By expanding first-level local communities into second-level ones, the method reveals more topological information than first-level local communities. When calculating the relative coefficients between two seed nodes with the Pearson correlation coefficient, it takes into account the shortest paths within the second-level local community, the edge clustering coefficients, and the impact of link density on the similarity of the two seed nodes, thus obtaining excellent link prediction results. The algorithm was tested against 11 well-known algorithms on 10 real network data sets, and the results show that it has excellent link prediction performance.
Anti-collision Route Planning of UAVs for Charging Ground WSN
HU Jie, LAN Yu-bin, OUYANG Fan
Computer Science. 2019, 46 (1): 162-168.  doi:10.11896/j.issn.1002-137X.2019.01.025
In the application of wireless charging of large-scale ground sensor nodes by multiple UAVs, the route planning scheme has a great influence on the coverage probability and lifetime of the WSN, while limited flight duration and collision avoidance constraints increase the difficulty of route planning. Firstly, this paper proposes a centralized Sequential Greedy Route Planning scheme (SGRP): using the position information of ground nodes, UAVs sequentially add nodes to their task sets and place them at the most appropriate positions in their routes. Theoretical analysis proves that even in the worst case, SGRP guarantees at least 50% of the performance of the optimal route planning scheme. Secondly, based on SGRP and a modified CPA collision detection model, the paper designs a Sequential Greedy Anti-Collision Route Planning scheme (SGACRP), in which an optimally matched combination of node, UAV and ordered path is selected at each iteration, maximizing the utility while meeting the constraints of limited UAV resources and collision avoidance. Finally, taking a time-discounted reward function as the utility function, simulation proves the effectiveness of the anti-collision measure and verifies that although it increases the charging completion time of the WSN, it causes no loss of the monitoring probability of the WSN. Simulation also shows that setting the static scores of nodes to different values according to their distances to objects can effectively improve the monitoring probability of the ground WSN.
Efficient Multicast Schemes for D2D Multicast Cluster in Wireless Cellular Network
CHI Kai-kai, TANG Ze-feng, ZHU Yi-nan, SHAO Qi-ke
Computer Science. 2019, 46 (1): 169-174.  doi:10.11896/j.issn.1002-137X.2019.01.026
In wireless cellular networks, device-to-device (D2D) communication can effectively offload the traffic of the base station (BS). This paper studies the effective multicast of data from the BS to multiple devices in a small area (such as a building), and proposes a D2D-communication-based multicast scheme that can trade off the packet reception ratio (PRR) against the average packet retransmission times (APRT). Furthermore, the maximal number of packet transmission retries is optimized to achieve PRR-constrained APRT minimization (APRT-M) and APRT-constrained PRR maximization (PRR-M), respectively. Compared with the traditional multicast scheme, the proposed scheme greatly reduces the multicast traffic load of the BS. APRT-M minimizes APRT at the cost of reducing PRR as much as allowed (but not below the given threshold), whereas PRR-M maximizes PRR at the cost of increasing APRT as much as allowed (but not above the given threshold).
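To see why the two optimizations pull the retry limit in opposite directions, consider an i.i.d. per-attempt loss model (our simplification for exposition, not the paper’s exact analysis): with per-attempt delivery probability $p$ and at most $R$ retransmissions,

$$\mathrm{PRR}=1-(1-p)^{R+1},\qquad \mathbb{E}[N]=\frac{1-(1-p)^{R+1}}{p},$$

where $\mathbb{E}[N]$ is the expected number of transmissions per packet. Both quantities increase with $R$, so APRT-M pushes $R$ down toward the PRR threshold while PRR-M pushes it up toward the APRT threshold.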
Information Security
Chaotic Mapping Asynchronous Authentication Key Agreement Scheme with Smart-cards
WANG Song-wei, CHEN Jian-hua
Computer Science. 2019, 46 (1): 175-181.  doi:10.11896/j.issn.1002-137X.2019.01.027
Identity authentication is an important means to ensure information security, and chaotic-map-based authentication schemes have recently become a hot research topic because of their high efficiency. In 2015, Zhu proposed an improved chaotic map protocol and claimed that it resists impersonation attacks and dictionary attacks while providing user anonymity. However, Tong et al. pointed out that Zhu’s protocol suffers from offline dictionary attacks and impersonation attacks and cannot guarantee user anonymity, and proposed a new improved protocol (the TC scheme). This paper points out security defects of both Zhu’s scheme and the TC scheme, for example that forward security cannot be guaranteed and that they are vulnerable to denial-of-service attacks, and proposes a new protocol scheme using smart cards. Security analysis and comparison with other related protocols indicate the high security and practicability of the proposed protocol.
Cloud-based Lightweight RFID Group Tag Authentication Protocol
LI Lu-lu, DONG Qing-kuan, CHEN Meng-meng
Computer Science. 2019, 46 (1): 182-189.  doi:10.11896/j.issn.1002-137X.2019.01.028
As a key technology for identifying objects in the Internet of Things (IoT), radio frequency identification (RFID) has been widely used because of its advantages such as low cost and portability. An RFID system based on cloud storage has a wider application market than the traditional RFID system, but its security and privacy issues are more serious. At the same time, many existing group authentication protocols do not meet lightweight requirements and suffer from desynchronization during key updating. This paper proposes a cloud-based lightweight RFID tag-group authentication protocol. The protocol is based on the hash function; it not only resolves the issues above, but also filters out invalid and fake tags. Finally, a formal analysis of the proposed protocol is conducted using BAN logic. The security analysis shows that the protocol can resist DoS attacks and other common attacks, and possesses forward security.
Privacy Protection Algorithm Based on Multi-characteristics of Trajectory
XU Hua-jie, WU Qing-hua, HU Xiao-ming
Computer Science. 2019, 46 (1): 190-195.  doi:10.11896/j.issn.1002-137X.2019.01.029
Most existing trajectory privacy protection algorithms based on trajectory clustering use only spatial features as the standard when measuring the similarity between trajectories, ignoring the influence of other spatiotemporal characteristics, which may lead to low availability of the anonymized data. In view of this, a protection algorithm based on integrated spatiotemporal characteristics of trajectories was proposed. The algorithm takes the uncertainty of trajectory data into account and measures similarity between trajectories using differences in four aspects, namely direction, speed, time and space, in order to improve the similarity of the trajectories within the same cluster. The trajectories of the same cluster are then spatially shifted to achieve k-anonymity within the cluster. The experimental results show that, compared with the classical privacy protection algorithm, the trajectory data protected by the proposed algorithm have higher overall data availability under the same privacy protection requirements.
Asymmetric JPEG Steganography Based on Correlation in DCT Domain
MAO Bing-hua, WANG Zi-chi, ZHANG Xin-peng
Computer Science. 2019, 46 (1): 196-200.  doi:10.11896/j.issn.1002-137X.2019.01.030
Current distortion functions for JPEG steganography allocate the same embedding cost to the +1 and -1 embedding changes. However, due to the correlation of the DCT domain in JPEG images, modifying a coefficient by +1 or by -1 affects the image content differently, so the corresponding embedding costs should also differ. Based on this observation, a universal cost-value optimization method suitable for JPEG steganography was proposed, which exploits the correlation of coefficients at the same position in adjacent DCT blocks. The current DCT coefficient is predicted by averaging the DCT coefficients at the same position in the eight neighboring blocks. For an existing JPEG distortion function, the embedding costs for +1 and -1 are then differentiated on the principle of moving closer to the predicted value; the adjusted costs guide the stego-modified DCT coefficients toward the predicted values, enhancing the DCT-domain correlation and thereby improving steganographic security. This method can be combined with any existing JPEG distortion function. Experimental results show that it increases the security of existing JPEG steganography by 2.4% in detector test error on average, without increasing the time complexity of the original algorithm.
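A sketch of the cost adjustment described above (the halving rule is an illustrative stand-in for the paper’s exact differentiation):

```python
import numpy as np

def asymmetric_costs(block, neighbor_blocks, base_cost):
    """Split one symmetric cost map into +1/-1 costs for an 8x8 DCT block.

    neighbor_blocks: the eight neighboring DCT blocks; base_cost: the cost
    map produced by any existing JPEG distortion function.
    """
    pred = np.mean(neighbor_blocks, axis=0)   # per-position prediction
    cost_plus, cost_minus = base_cost.copy(), base_cost.copy()
    toward_plus = pred > block                # +1 moves the coefficient toward pred
    cost_plus[toward_plus] *= 0.5             # cheapen the change toward the prediction
    cost_minus[~toward_plus] *= 0.5
    return cost_plus, cost_minus
```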
SQL Injection Intrusion Avoidance Scheme Based on Automatic Insertion of Dataflow-relevant Filters
YIN Zhong-xu, ZHANG Lian-cheng
Computer Science. 2019, 46 (1): 201-205.  doi:10.11896/j.issn.1002-137X.2019.01.031
SQL injection is a widespread vulnerability in dynamic Web applications. This paper analyzes the necessary conditions for the introduction and exploitation of injection vulnerabilities, and applies distinct protection to different types of injection variables (numeric, character and search types). It parses both the host language and the object language to locate the query variables and their types in SQL statements, and constructs a data dependency subgraph containing source and sink points on the basis of the control flow graph. For this subgraph, a filter insertion algorithm is designed, with filter policies defined according to the input and query types. On this basis, a dataflow-analysis-based scheme is implemented that automatically inserts filters before the relevant database operations. Analysis and tests of the proposed scheme suggest its effectiveness.
Privacy Preserving Algorithm Based on Dynamic Update in Medical Data Publishing
CHEN Hong-yun, WANG Jie-hua, HU Zhao-peng, JIA Lu, YU Ji-wen
Computer Science. 2019, 46 (1): 206-211.  doi:10.11896/j.issn.1002-137X.2019.01.032
With the development of information technology, privacy protection in medical data publishing has always been a hotspot of data privacy research, and one important issue is the synchronous updating of published medical data. To solve this problem, an algorithm supporting dynamic updates of (α,k)-anonymous data sets, (α,k)-UPDATE, was proposed. By calculating semantic closeness, the algorithm selects the most similar equivalence class in the (α,k)-anonymous data set and then performs the corresponding update operation. The resulting dynamically updated data set still satisfies (α,k)-anonymity and effectively protects patients’ private information. The experimental results show that the algorithm runs stably and effectively in real environments, satisfies the real-time consistency of medical data sets, and has the advantages of shorter running time and less information loss.
Software & Database Technology
Software Operational Reliability Growth Model Considering End-user Behavior and Fault-correction Time Delay
YANG Jian-feng, ZHAO Ming, HU Wen-sheng
Computer Science. 2019, 46 (1): 212-218.  doi:10.11896/j.issn.1002-137X.2019.01.033
Most traditional software reliability models assume that the testing environment and the operating environment are the same, that is, that a software reliability model fitted to failure data from the testing phase can predict operational reliability. It is well known that correcting bugs improves software reliability, but another phenomenon also occurs: the failure rate decreases as users become more familiar with the system. In this paper, the inherent fault-detection process (IFDP), the inherent fault-correction process (IFCP) and the external fault-detection process (EFDP) are discussed, and a software operational reliability growth model considering end-user behavior and fault-correction time delay is proposed. Using real end-user bug tracking data for open source software, the numerical results show that the proposed model is useful and powerful.
Code-predicting Model Based on Method Constraints
FANG Wen-yuan, LIU Yan, ZHU Ma
Computer Science. 2019, 46 (1): 219-225.  doi:10.11896/j.issn.1002-137X.2019.01.034
State-of-the-art studies show that extracting code features from a large amount of source code and building a statistical language model provides good predictive ability for code. However, present methods can still be improved in prediction accuracy, because existing statistical language models often use only the text of the code as feature words, so the syntactic structure information of the code is not fully utilized. To improve code prediction performance, this paper proposes the concept of the constraint relation of methods. On this basis, it studies the method invocation sequences of Java objects, abstracts code features, and builds a statistical language model to perform code prediction. Moreover, it studies the application scope of the prediction model based on the method constraint relation in the Java language. Experiments show that this method improves accuracy by 8% compared with the existing model.
Fault Tree Module Expansion Decomposition Method Based on Linear-time Algorithm
SONG Jun-hua, WEI Ou
Computer Science. 2019, 46 (1): 226-231.  doi:10.11896/j.issn.1002-137X.2019.01.035
Fault trees are widely used to analyze safety in many safety-critical fields, including the nuclear industry, aerospace engineering and traffic control. However, analyzing large-scale fault trees in major industries such as nuclear power stations consumes large amounts of computation resources, leading to low efficiency and excessive time consumption. To solve this problem, this paper extends the existing linear-time algorithm and proposes new fault tree reduction rules and a modular-derivation-based decomposition algorithm. The concept of equivalent events is presented to increase the number of modules decomposed by the linear-time algorithm, and, considering time complexity and resource utilization, new reduction rules are proposed to remove redundant information from the fault tree. Experimental results show that the proposed decomposition method optimizes fault tree analysis effectively, and reduces time consumption and memory usage when dealing with large-scale fault trees.
Reverse k Nearest Neighbor Queries in Time-dependent Road Networks
LI Jia-jia, SHEN Pan-pan, XIA Xiu-feng, LIU Xiang-yu
Computer Science. 2019, 46 (1): 232-237.  doi:10.11896/j.issn.1002-137X.2019.01.036
Most existing efficient algorithms for reverse k nearest neighbor (RkNN) queries focus on Euclidean space or static networks, and few study RkNN queries in time-dependent road networks; moreover, the existing algorithm is inefficient when the density of interest points is sparse or k is large. To address these problems, this paper proposes a subnet-division-based reverse k nearest neighbor query algorithm, mTD-SubG. Firstly, the entire road network is divided into subnets of the same size, which are expanded through border nodes to speed up the search for interest points. Secondly, pruning is used to narrow the expansion range of the road network. Finally, the existing nearest neighbor query algorithm for time-dependent road networks is applied to each found interest point to determine whether it belongs to the RkNN result set. Extensive experiments compare mTD-SubG with the existing algorithm mTD-Eager: the results show that the response time of mTD-SubG is 85.05% lower than that of mTD-Eager, and that mTD-SubG traverses 51.40% fewer nodes than mTD-Eager.
Artificial Intelligence
Cross-language Knowledge Linking Based on Bilingual Topic Model and Bilingual Embedding
YU Yuan-yuan, CHAO Wen-han, HE Yue-ying, LI Zhou-jun
Computer Science. 2019, 46 (1): 238-244.  doi:10.11896/j.issn.1002-137X.2019.01.037
Cross-language knowledge linking (CLKL) refers to establishing links between encyclopedia articles in different languages that describe the same content. CLKL can be divided into two parts: candidate selection and candidate ranking. This paper first formulates candidate selection as a cross-language information retrieval problem and proposes generating queries by combining titles with keywords, which greatly improves the recall of candidate selection, reaching 93.8%. For candidate ranking, a ranking model is trained by combining a bilingual topic model with bilingual embeddings, and linking is implemented between military articles in English Wikipedia and Chinese Baidu Baike. The evaluation results show that the model achieves an accuracy of 75%, significantly improving CLKL performance. The proposed method does not depend on language-specific or domain-specific characteristics, and it can easily be extended to CLKL in other languages and domains.
S-shaped Function Based Adaptive Particle Swarm Optimization Algorithm
HUANG Yang, LU Hai-yan, XU Kai-bo, HU Shi-juan
Computer Science. 2019, 46 (1): 245-250.  doi:10.11896/j.issn.1002-137X.2019.01.038
Aiming at the problems of low solution precision and slow convergence in the later stage of particle swarm optimization, this paper presents an S-shaped function based adaptive particle swarm optimization algorithm (SAPSO). The algorithm exploits the characteristics of an upside-down S-shaped function to adjust the inertia weight nonlinearly, better balancing global and local search ability. In addition, an S-shaped function is introduced into the position update equation, and the ratio of an individual particle’s fitness value to the swarm’s average fitness value is used to adaptively adjust the step size of the search, enhancing the efficiency of the algorithm. Simulation results on a set of typical test functions show that SAPSO is significantly superior to several existing improved PSO algorithms in both convergence rate and solution accuracy.
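An illustrative reading of the upside-down S-shaped weight schedule (the constants are our assumptions, not SAPSO’s published settings):

```python
import numpy as np

def inertia_weight(t, T, w_min=0.4, w_max=0.9, steepness=10.0):
    """Reversed-sigmoid schedule: w stays near w_max early in the run
    (global search) and decays smoothly to w_min late (local search)."""
    s = 1.0 / (1.0 + np.exp(steepness * (t / T - 0.5)))  # 1 -> 0 as t -> T
    return w_min + (w_max - w_min) * s
```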
iBTC: A Trajectory Clustering Algorithm Based on Isolation Forest
ZHANG Huai-feng, PI De-chang, DONG Yu-lan
Computer Science. 2019, 46 (1): 251-259.  doi:10.11896/j.issn.1002-137X.2019.01.039
The clustering of moving-object trajectories has important theoretical and practical value in urban planning, public space design and moving-object behavior prediction. Aiming at the poor clustering effect of traditional clustering algorithms (e.g., k-means, DBSCAN) on moving-object trajectories, this paper proposes a new trajectory clustering algorithm named iBTC. The algorithm first segments each trajectory: based on the minimum description length (MDL) principle, the trajectory segmentation problem is modeled as a shortest-path problem on an undirected graph, and Dijkstra’s algorithm is used to find the optimal segmentation. Then, based on the idea of Isolation Forest, the trajectory clustering problem is transformed into a special anomaly detection problem, and the algorithm clusters the trajectory data with a partition-merging procedure. Experiments on a simulated data set and pedestrian trajectory data recorded by surveillance cameras show that the algorithm achieves a good clustering effect.
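The segmentation step can be sketched as the shortest-path search described above (Dijkstra over candidate segments; the MDL-style edge cost is left to the caller, since the paper’s exact formulation is not reproduced here):

```python
import heapq

def segment_trajectory(points, cost):
    """Model segmentation as a shortest path: node i is trajectory point i,
    edge (i, j) means 'approximate points i..j by one segment', and
    cost(points, i, j) is its MDL-style weight (supplied by the caller)."""
    n = len(points)
    dist, prev = [float("inf")] * n, [-1] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:                                # Dijkstra over segment choices
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue
        for j in range(i + 1, n):
            nd = d + cost(points, i, j)
            if nd < dist[j]:
                dist[j], prev[j] = nd, i
                heapq.heappush(heap, (nd, j))
    cuts, j = [], n - 1                        # backtrack the chosen cut points
    while j > 0:
        cuts.append(j)
        j = prev[j]
    return [0] + cuts[::-1]
```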
Attention Based Acoustic Model Combining Bottleneck Features
LONG Xing-yan, QU Dan, ZHANG Wen-lin
Computer Science. 2019, 46 (1): 260-264.  doi:10.11896/j.issn.1002-137X.2019.01.040
Attention-based sequence-to-sequence acoustic models are currently a hotspot of speech recognition research. In view of their long training time and poor robustness, this paper proposes an acoustic model combining bottleneck features. The model consists of a bottleneck feature extraction network based on a deep belief network (DBN) and an attention-based sequence-to-sequence model. The DBN introduces the prior information of traditional acoustic models, which speeds up convergence and enhances the robustness and discriminability of the bottleneck features, while the attention model uses the temporal information of the feature sequence to calculate the posterior probability of the phoneme sequence. On the basis of the baseline system, training time is decreased by reducing the number of recurrent layers in the attention model, and recognition accuracy is optimized by changing the input dimensions and outputs of the bottleneck feature extraction network. Experiments on the TIMIT dataset show that on the core test set the phoneme error rate decreases to 17.80%, the average training time per iteration decreases by 52%, and the number of training epochs decreases from 139 to 89.
Spatial Estimation Method of Air Quality Based on Terrain Factors
LV Ming-qi, LI Yi-fan, CHEN Tie-ming
Computer Science. 2019, 46 (1): 265-270.  doi:10.11896/j.issn.1002-137X.2019.01.041
Monitoring air quality is important for pollution evaluation, harm reduction and environmental protection. However, since the number of air quality monitoring stations is extremely limited and air quality varies non-linearly with location, spatial estimation of air quality (i.e., estimating the air quality at any location without a monitoring station) is a challenging task. Current state-of-the-art spatial estimation methods take factors such as traffic flow, human mobility and POIs into account and build estimation models based on machine learning, but they still have limitations. On one hand, the considered factors mainly reflect the characteristics of urban areas, so these methods are constrained to urban use. On the other hand, they train models on features extracted directly from the factors, without refinement. Aiming at these problems, this paper proposes a spatial estimation method for air quality based on terrain factors. First, a terrain database is established and terrain features are extracted. Then, the original terrain features are deeply transformed based on an ensemble decision tree model. Finally, a regression model is trained based on a factorization machine. Experiments on real datasets suggest that the proposed method has an advantage in estimating air quality over areas with natural terrain (e.g., highland, forest, water).
Inter-regional Accessibility Evaluation Model of Urban Based on Taxi GPS Big Data
WANG Ying-bo, SHAN Xiao-chen, MENG Yu
Computer Science. 2019, 46 (1): 271-277.  doi:10.11896/j.issn.1002-137X.2019.01.042
Abstract PDF(2072KB) ( 989 )   
References | Related Articles | Metrics
The evaluation of inter-regional accessibility plays an important role in improving the efficiency of ground traffic in cities. Traditional evaluation methods use the straight-line distance between regions to calculate the regional average travel time, which leads to a large error between the average and actual values; meanwhile, measurement methods based on hotspot statistics of taxi pick-up areas perform poorly in quantifying areas whose travel destinations are unevenly distributed. To solve the inaccuracy caused by these two problems, this paper constructed an inter-regional accessibility evaluation model based on taxi GPS data, extracting complete trips from the GPS records to calculate the actual travel time and thereby improve the accuracy of the average travel time. On this basis, this paper proposed a quantitative calculation model of the accessibility rate based on a four-dimensional OD matrix, and used the accessibility rate as the quantification standard of accessibility, solving the inaccurate evaluation caused by the uneven distribution of travel destinations in some areas. Experiments show that the accuracy of the proposed model is 9.4%~28.7% higher than that of the traditional method; the improvement is especially significant in areas with unevenly distributed travel destinations.
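The four-dimensional OD matrix and the accessibility rate can be illustrated with a toy grid. All values below are hypothetical, and the indexing convention (origin row, origin column, destination row, destination column) and the threshold rule are assumptions made for illustration, not the paper's definition.

    import numpy as np

    # Hypothetical grid partition of the city: R x C cells.  A trip is
    # (origin_cell, dest_cell, travel_minutes) extracted from taxi GPS
    # pick-up/drop-off points; the values below are made up.
    R, C, T_MAX = 4, 4, 30.0
    trips = [((0, 1), (3, 2), 18.5), ((0, 1), (3, 2), 41.0),
             ((0, 1), (2, 2), 12.0), ((1, 3), (0, 0), 25.0)]

    # Four-dimensional OD matrices: trip counts and reachable-trip counts
    # indexed by (origin_row, origin_col, dest_row, dest_col).
    total = np.zeros((R, C, R, C))
    reachable = np.zeros((R, C, R, C))
    for (orow, ocol), (drow, dcol), minutes in trips:
        total[orow, ocol, drow, dcol] += 1
        if minutes <= T_MAX:          # trip completed within the threshold
            reachable[orow, ocol, drow, dcol] += 1

    # Accessibility rate of an OD pair: share of trips within the
    # threshold; zero-trip pairs are left undefined (NaN).
    with np.errstate(invalid="ignore"):
        rate = reachable / total
    print(rate[0, 1, 3, 2])   # 0.5 for the example data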
Graphics, Image & Pattern Recognition
3D Model Retrieval Based on Deep Convolution Neural Network
LIU Zhi, LI Jiang-chuan
Computer Science. 2019, 46 (1): 278-284.  doi:10.11896/j.issn.1002-137X.2019.01.043
Abstract PDF(3617KB) ( 1134 )   
References | Related Articles | Metrics
To make better use of 3D model datasets for feature self-learning, this paper proposed a 3D model retrieval method that takes natural images as input, uses an optimal view set of the 3D models as its basis, and applies deep features obtained by training deep convolutional neural networks for retrieval. Firstly, views of each 3D model are rendered from multiple viewpoints, and the optimal views are selected according to their gray entropy. Secondly, the view set is used to train a deep convolutional neural network, and the deep features of the optimal views are extracted and reduced in dimensionality. At the same time, edge contours are extracted from the input natural image, and a set of candidate 3D models is obtained by similarity matching. Finally, according to the ratio of the total number of similar models to the length of the search list, the retrieval list is reordered to produce the final result. Experimental results show that the method effectively uses a deep convolutional neural network to extract deep features from 3D model views, reduces the difficulty of obtaining input sources, and improves retrieval efficiency.
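The gray-entropy view ranking step admits a compact sketch. The code below is illustrative only: it scores each rendered view by the Shannon entropy of its 8-bit gray-level histogram and keeps the top-k views; the value of k is an assumption.

    import numpy as np

    def gray_entropy(view):
        # Shannon entropy of the 8-bit gray-level histogram of a rendered
        # view; higher entropy = a more informative silhouette/shading.
        hist, _ = np.histogram(view, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def select_best_views(views, k=3):
        # Rank candidate views (arrays of shape HxW, gray-level values)
        # by gray entropy and keep the top k as the "optimal view set".
        order = sorted(range(len(views)),
                       key=lambda i: gray_entropy(views[i]), reverse=True)
        return order[:k]

    rng = np.random.default_rng(1)
    views = [rng.integers(0, 256, size=(64, 64)) for _ in range(12)]
    print(select_best_views(views))   # indices of the top-3 views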
Prediction of Malignant and Benign Gastrointestinal Stromal Tumors Based on Radiomics Feature
LIU Ping-ping, ZHANG Wen-hua, LU Zhen-tai, CHEN Tao, LI Guo-xin
Computer Science. 2019, 46 (1): 285-290.  doi:10.11896/j.issn.1002-137X.2019.01.044
Abstract PDF(1856KB) ( 946 )   
References | Related Articles | Metrics
Gastrointestinal stromal tumors (GIST) are the most common mesenchymal tumors of the gastrointestinal tract, with non-directional differentiation, varying malignancy potential and deficient specificity. Diagnosing whether a GIST is benign or malignant is therefore an issue of great concern, yet it is relatively difficult to study solid tumor heterogeneity with pathological biopsy and CT imaging. This paper proposed a noninvasive method, based on a large number of quantitative radiomics features extracted from CT images and an SVM classifier, to discriminate benign from malignant GISTs. 120 patients with GISTs were enrolled in this retrospective study. Firstly, four non-texture (shape) features and forty-three texture features were extracted from the tumor region of each patient's CT images. On the initial feature set, ReliefF and forward selection were executed sequentially for feature selection. Then, an SVM classifier was trained on the optimal feature subset for benign/malignant discrimination; 14 texture features were selected from the original feature set as the optimal subset. The AUC, accuracy, sensitivity and specificity of the model were 0.9949, 0.9277, 0.9537 and 0.9018 on the training set, and 0.8524, 0.8313, 0.8197 and 0.8420 on the test set. The model established by the radiomics method provides a noninvasive way of predicting whether a GIST is benign or malignant, and may serve as an auxiliary diagnostic tool to improve discrimination accuracy efficiently.
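The feature selection pipeline can be sketched as follows, on synthetic data standing in for the 47 radiomics features. The ReliefF scorer is a simplified one-nearest-hit/one-nearest-miss variant written for illustration, and scikit-learn's SequentialFeatureSelector plays the role of the forward selection wrapped around the SVM; the stage-1 cutoff of 20 features is an assumption.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    def relieff_scores(X, y):
        # Simplified binary ReliefF: reward features that differ on the
        # nearest miss and agree on the nearest hit (L1 distances).
        n, d = X.shape
        scores = np.zeros(d)
        for i in range(n):
            diff = np.abs(X - X[i]).sum(axis=1)
            diff[i] = np.inf
            same, other = y == y[i], y != y[i]
            hit = np.argmin(np.where(same, diff, np.inf))
            miss = np.argmin(np.where(other, diff, np.inf))
            scores += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        return scores / n

    # Synthetic stand-in for the 4 shape + 43 texture features.
    X, y = make_classification(n_samples=120, n_features=47,
                               n_informative=10, random_state=0)

    # Stage 1: keep the 20 best features by ReliefF score.
    keep = np.argsort(relieff_scores(X, y))[-20:]
    # Stage 2: forward selection wrapped around the SVM classifier.
    svm = SVC(kernel="rbf", gamma="scale")
    sfs = SequentialFeatureSelector(svm, n_features_to_select=14,
                                    direction="forward").fit(X[:, keep], y)
    print(keep[sfs.get_support()])   # indices of the 14 selected features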
Multi-hypothesis Reconstruction Algorithm of DCVS Based on Weighted Non-local Similarity
DU Xiu-li, HU Xing, CHEN Bo, QIU Shao-ming
Computer Science. 2019, 46 (1): 291-296.  doi:10.11896/j.issn.1002-137X.2019.01.045
Abstract PDF(3690KB) ( 891 )   
References | Related Articles | Metrics
The multi-hypothesis reconstruction algorithm of DCVS (Distributed Compressed Video Sensing) introduces the multi-hypothesis motion-estimation prediction of traditional video coding into the DCVS coding system, improving the reconstruction quality of video sequences. In that algorithm, blocks with large changes use neighboring blocks of the current frame as references, so its performance suffers when the neighborhood contains rich textures and details. By improving the idea of non-local similarity, this paper proposed a multi-hypothesis reconstruction algorithm of DCVS based on weighted non-local similarity. In the improved algorithm, for texture blocks within the blocks with large changes, weighted non-local similarity is used to search for self-similar blocks in adjacent reconstructed frames, generating supplementary reconstruction information blocks; for non-texture blocks, weighted non-local similarity is used to generate similar blocks directly. For blocks with small changes, inter-frame multi-hypothesis reconstruction is adopted to assist the reconstruction of non-key frames. Simulation results on different video sequences show that the proposed algorithm improves the reconstruction quality of video sequences effectively, achieving better SSIM and PSNR, with PSNR about 1 dB higher.
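A weighted non-local block search can be sketched as follows: every co-sized patch of a reconstructed reference frame is compared against the target block, the best candidates are weighted by a Gaussian of their distance (the non-local weight), and the weighted combination is returned. The weighting scheme and the parameters top and h are illustrative assumptions, not the paper's values.

    import numpy as np

    def weighted_nonlocal_match(block, ref_frame, bsize=8, top=4, h=10.0):
        # Exhaustively compare `block` with every bsize x bsize patch of
        # a reconstructed reference frame, keep the `top` closest patches,
        # and combine them with Gaussian non-local weights.
        H, W = ref_frame.shape
        cands = []
        for r in range(0, H - bsize + 1):
            for c in range(0, W - bsize + 1):
                patch = ref_frame[r:r + bsize, c:c + bsize]
                d2 = float(((patch - block) ** 2).mean())
                cands.append((d2, patch))
        cands.sort(key=lambda t: t[0])
        best = cands[:top]
        w = np.array([np.exp(-d2 / (h * h)) for d2, _ in best])
        w /= w.sum()
        return sum(wi * p for wi, (_, p) in zip(w, best))

    rng = np.random.default_rng(2)
    frame = rng.normal(size=(32, 32))           # "adjacent reconstructed frame"
    block = frame[4:12, 4:12] + rng.normal(scale=0.05, size=(8, 8))
    print(weighted_nonlocal_match(block, frame).shape)  # (8, 8)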
Infrared Image and Visible Image Fusion Based on FPDEs and CBF
LI Chang-xing, WU Jie
Computer Science. 2019, 46 (1): 297-302.  doi:10.11896/j.issn.1002-137X.2019.01.046
Abstract PDF(3044KB) ( 927 )   
References | Related Articles | Metrics
Considering the low contrast, blocking effects, artifacts and edge-region distortion of traditional fused images, this paper proposed an infrared and visible image fusion method based on fourth-order partial differential equations (FPDEs) and the cross bilateral filter (CBF). Firstly, FPDEs and the CBF are used to obtain the approximation layers and detail layers of the source images, respectively. Secondly, because the approximation layers obtained by multi-scale decomposition may retain a large amount of residual low-frequency information, which strongly affects the overall contrast of the fused image, a fusion rule based on the visual saliency map (VSM) is used to fuse the approximation layers. Thirdly, an improved Karhunen-Loeve transform is applied to the detail layers to obtain the optimal fusion weights. Finally, the fused image is generated as the linear combination of the final approximation and detail layers. Experimental results show that the standard deviation of the fused image obtained by the proposed method is about 43.3% higher than that of the PCA-based and cross-bilateral-filter-based methods, and the average gradient and spatial frequency increase by about 9.46% and 19.79% on average, respectively, compared with the GFF and VSM_WLS algorithms.
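The VSM-based rule for the approximation layers can be sketched as follows. The saliency map here is the common histogram-based construction (each gray level's saliency is its total contrast to all other levels), and the pixel-wise weighting is an illustrative reading of such a rule, not the authors' exact formulation.

    import numpy as np

    def visual_saliency_map(img):
        # Histogram-based saliency: each gray level's saliency is its
        # total absolute difference from all other levels, mapped back
        # per pixel and normalized to [0, 1].
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        levels = np.arange(256)
        sal_per_level = (hist * np.abs(levels[:, None] - levels[None, :])).sum(axis=1)
        sal = sal_per_level[img.astype(np.uint8)]
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

    def fuse_approximation(base_ir, base_vis):
        # Weight the two approximation layers pixel-wise by their
        # relative saliency, as in a VSM-based fusion rule.
        w_ir = visual_saliency_map(base_ir)
        w_vis = visual_saliency_map(base_vis)
        alpha = w_ir / (w_ir + w_vis + 1e-9)
        return alpha * base_ir + (1 - alpha) * base_vis

    rng = np.random.default_rng(3)
    ir = rng.integers(0, 256, (64, 64)).astype(float)    # IR approximation layer
    vis = rng.integers(0, 256, (64, 64)).astype(float)   # visible approximation layer
    print(fuse_approximation(ir, vis).shape)             # (64, 64)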
Study on Bayonet Recognition Engine Based on Cascade Multitask Deep Learning
HE Xia, TANG Yi-ping, YUAN Gong-ping, CHEN Peng, WANG Li-ran
Computer Science. 2019, 46 (1): 303-308.  doi:10.11896/j.issn.1002-137X.2019.01.047
Abstract PDF(1697KB) ( 1251 )   
References | Related Articles | Metrics
Aiming at the complexity of the environment, the diversity of requirements, the relevance of tasks and the real-time requirements of identification when converting unstructured bayonet video data into intelligent structured information, this paper proposed a bayonet recognition engine based on cascaded multitask deep learning. The method makes full use of the relationship between the segmentation and detection-recognition tasks to achieve high-precision, efficient, synchronous and real-time recognition of various basic information of bayonet vehicles (motorcycle types, brands, series, colors, license plates, etc.). First, a deep convolutional neural network automatically extracts deep features, and logistic regression is performed on the feature map to extract the regions of interest from the complex background (including multi-vehicle objects). Then a multitask deep learning network performs multilevel multitask recognition on the extracted vehicle objects. Experimental results show that the proposed method is superior to traditional computer vision methods and existing deep learning based recognition engines in terms of recognition accuracy and efficiency: the accuracy of recognizing and detecting motorcycle types, brands, series and license plates each exceeds 99%, and the detection efficiency is increased by 1.6 times.
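The multitask idea can be sketched independently of the detection stage: a shared trunk feeds one classification head per vehicle attribute, and a joint loss ties the tasks together during training. Layer sizes and class counts below are placeholders, not the paper's configuration.

    import torch
    import torch.nn as nn

    class MultitaskVehicleHead(nn.Module):
        # Minimal multitask sketch: one shared convolutional trunk and a
        # separate classification head per attribute.
        def __init__(self, n_type=10, n_brand=100, n_color=12):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.heads = nn.ModuleDict({
                "type": nn.Linear(64, n_type),
                "brand": nn.Linear(64, n_brand),
                "color": nn.Linear(64, n_color)})
        def forward(self, x):
            z = self.trunk(x)
            return {k: head(z) for k, head in self.heads.items()}

    # A joint loss would sum the per-task cross-entropies, which is how
    # the relationships between tasks are exploited during training.
    model = MultitaskVehicleHead()
    out = model(torch.randn(4, 3, 128, 128))
    print({k: v.shape for k, v in out.items()})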
Interdiscipline & Frontier
DNA Sticker Algorithm for k-vertex Induced Sub-graphs of Directed Graphs
ZHU Wei-jun, ZHANG Chun-yan, ZHOU Qing-lei, CHEN Yong-hua
Computer Science. 2019, 46 (1): 309-313.  doi:10.11896/j.issn.1002-137X.2019.01.048
Abstract PDF(1839KB) ( 917 )   
References | Related Articles | Metrics
Obtaining all the k-vertex induced sub-graphs of a directed graph is a complex computational problem. DNA computing is a non-classical computing technique developed in recent years, which can solve some complex computational problems via DNA molecules and their biochemical reactions. To obtain all the k-vertex induced sub-graphs of a directed graph using DNA computing, a DNA sticker algorithm for constructing sub-graphs was proposed. First, the existing sticker system provides the basic operators for the new algorithm; each basic operator has its own computational function and can be implemented by a specific standard biochemical reaction. Second, these basic operations are organized in a logical way, forming program structures such as sequence and loop. Finally, all the k-vertex induced sub-graphs of a given directed graph can be obtained by reading the results of the biochemical reactions. Simulation results demonstrate that, under ideal conditions, the new algorithm significantly reduces the time required to generate the sub-graphs compared with classical algorithms.
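For reference, the computation the sticker algorithm performs in parallel can be stated classically in a few lines: enumerate every k-vertex subset and keep the directed edges that start and end inside it. This enumeration is combinatorial in k, which is precisely what motivates the DNA approach.

    from itertools import combinations

    def induced_subgraphs(vertices, edges, k):
        # Classical reference computation: every k-vertex subset together
        # with the edges of the directed graph that both start and end
        # inside that subset.
        for subset in combinations(vertices, k):
            s = set(subset)
            yield subset, [(u, v) for (u, v) in edges if u in s and v in s]

    V = [1, 2, 3, 4]
    E = [(1, 2), (2, 3), (3, 1), (3, 4)]
    for nodes, sub_edges in induced_subgraphs(V, E, 3):
        print(nodes, sub_edges)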
Optimization of Breadth-first Search Algorithm Based on Many-core Platform
XU Qi-ze, HAN Wen-ting, CHEN Jun-shi, AN Hong
Computer Science. 2019, 46 (1): 314-319.  doi:10.11896/j.issn.1002-137X.2019.01.049
Abstract PDF(1526KB) ( 1118 )   
References | Related Articles | Metrics
Graph algorithms have important application value in many fields. With the development of social informatization, the amount of graph data to be processed has grown ever larger, and the performance of graph algorithms has become a research hotspot. Breadth-first search (BFS) is an important graph algorithm, and studying its performance optimization techniques can provide a reference for optimizing other graph algorithms. Current work on the new-generation Xeon Phi many-core processors is based on the top-down algorithm and does not take the effects of NUMA on performance into account. Based on the hybrid breadth-first search algorithm and the NUMA topology, this paper optimized the BFS algorithm from the perspectives of task allocation, vectorization and data preprocessing, and designed and implemented a high-performance parallel BFS algorithm on the Xeon Phi platform. Experimental results show that the optimized algorithm gains a 50%~145% improvement over the official Graph500 tuned algorithm on test data of different scales.
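The hybrid strategy the optimization builds on can be sketched as follows. This is the generic direction-optimizing top-down/bottom-up switch (Beamer et al.) with a simplified threshold rule, not the paper's Xeon Phi implementation.

    from collections import defaultdict

    def hybrid_bfs(adj, radj, n, source, alpha=4):
        # Direction-optimizing BFS: expand top-down while the frontier is
        # small; switch to bottom-up (every unvisited vertex scans its
        # in-neighbors for a frontier parent) once the frontier grows
        # past n/alpha.  The threshold rule is simplified.
        parent = {source: source}
        frontier = {source}
        while frontier:
            nxt = set()
            if len(frontier) < n // alpha:          # top-down step
                for u in frontier:
                    for v in adj[u]:
                        if v not in parent:
                            parent[v] = u
                            nxt.add(v)
            else:                                   # bottom-up step
                for v in range(n):
                    if v not in parent:
                        for u in radj[v]:
                            if u in frontier:
                                parent[v] = u
                                nxt.add(v)
                                break
            frontier = nxt
        return parent

    adj = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: [4]})
    radj = defaultdict(list, {1: [0], 2: [0], 3: [1, 2], 4: [3]})
    print(hybrid_bfs(adj, radj, 5, 0))   # parent pointers of the BFS tree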
Study on SIMD Method of Vector Math Library
ZHOU Bei, HUANG Yong-zhong, XU Jin-chen, GUO Shao-zhong
Computer Science. 2019, 46 (1): 320-324.  doi:10.11896/j.issn.1002-137X.2019.01.050
Abstract PDF(1543KB) ( 1106 )   
References | Related Articles | Metrics
With the emergence of SIMD, moving from scalar basic math libraries to vector math libraries is an inevitable trend. However, vectorization is difficult because math library code is complicated and branch-heavy. Moreover, SIMD instruction sets are incomplete, so some operations must be realized by frequent splitting and recombination of vectors, which sharply reduces performance. This paper proposed an effective vectorization method for vector math libraries, consisting of key code segment selection, data pre-processing vectorization and instruction vectorization. The method not only obtains as much performance improvement as possible, but also lays a solid foundation for later in-depth optimization. Experimental results show that it improves the performance of functions such as exp, pow and log10 by up to 24.2% on average.
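The scalar-to-vector transition can be illustrated by analogy in Python, where a NumPy ufunc plays the role of a SIMD vector math routine: one call processes a whole array instead of looping element by element. This is only an analogy for the idea, not the paper's implementation, which works at the level of SIMD instructions.

    import math
    import time
    import numpy as np

    # Scalar baseline: one libm-style call per element.
    def exp_scalar(xs):
        return [math.exp(x) for x in xs]

    # "Vectorized" version: one call over the whole array; NumPy's ufunc
    # stands in for a SIMD vector math routine.
    def exp_vector(xs):
        return np.exp(xs)

    xs = np.linspace(-10, 10, 1_000_000)
    t0 = time.perf_counter(); exp_scalar(xs); t1 = time.perf_counter()
    exp_vector(xs); t2 = time.perf_counter()
    print(f"scalar {t1 - t0:.3f}s  vector {t2 - t1:.3f}s")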