Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 46 Issue 7, 15 July 2019
Overview of Routing Availability in Intra-domain Routing Networks
GENG Hai-jun,ZHANG Shuang,YIN Xia
Computer Science. 2019, 46 (7): 1-6.  doi:10.11896/j.issn.1002-137X.2019.07.001
Routing availability refers to the probability that a user can obtain the requested service. With the development of the Internet, a large number of real-time services have emerged, the requirements for network timeliness are becoming ever higher, and high demands have been placed on the “self-repairing ability” of the Internet. However, network faults occur frequently, and routing loops and long convergence times may occur while network failures are being repaired. The repair time is usually between several seconds and tens of seconds, which cannot meet the real-time requirements of the Internet. Improving routing availability has therefore become an urgent problem. This paper summarizes and analyzes the existing schemes for improving routing availability and divides them into two major categories, namely passive protection schemes and route protection schemes. Research results at home and abroad are introduced in detail, the advantages and disadvantages of each scheme are compared, their main contributions and shortcomings are summarized and analyzed, and directions for further research are proposed.
Survey on Deep-learning-based Machine Reading Comprehension
LI Zhou-jun,WANG Chang-bao
Computer Science. 2019, 46 (7): 7-12.  doi:10.11896/j.issn.1002-137X.2019.07.002
Natural language processing is key to achieving artificial intelligence. Machine reading comprehension, as the crown jewel of natural language processing, has always been a focus of research in the field. With the rapid development of deep learning and neural networks in recent years, machine reading comprehension has made great progress. This paper first introduces the research background and development history of machine reading comprehension. Then, by reviewing the important progress in word vectors, attention mechanisms and answer prediction, it identifies open problems in recent research on machine reading comprehension. Finally, the outlook for machine reading comprehension is discussed.
Summary of Stylized Line Drawing Generation
LIU Zi-qi, LIU Shi-guang
Computer Science. 2019, 46 (7): 13-21.  doi:10.11896/j.issn.1002-137X.2019.07.003
Line drawing has a great advantage in conveying visual information. As a simple and effective means of visual communication, it stresses the main features of a subject so that people can grasp the key information quickly. At the same time, stylized line drawing, as an art form, enables people to appreciate and understand its artistic characteristics quickly. Line drawing generation techniques can be divided into 2D-image-based methods and 3D-model-based methods. Techniques based on 2D images include deep learning methods and traditional methods, the latter comprising data-driven and non-data-driven methods. Techniques based on 3D models comprise image-space methods, object-space methods and hybrids of the two. By introducing and comparing the various methods and analyzing their respective advantages and disadvantages, this paper summarizes the existing problems of line drawing generation technology and their possible solutions. On this basis, the future development trend of line drawing is discussed.
Review of Computer Aided Diagnosis for Parkinson’s Tremor and Essential Tremor
ZHANG Yu-qian,GU Dong-yun
Computer Science. 2019, 46 (7): 22-29.  doi:10.11896/j.issn.1002-137X.2019.07.004
The diagnosis of Parkinson’s tremor and essential tremor has long been a clinical problem, and a proper diagnosis is of vital importance for the treatment and rehabilitation of patients. With the development of sensor technology and artificial intelligence (AI), more and more researchers have begun to use state-of-the-art technology to assist the diagnosis of these two diseases, and satisfactory results have been achieved. This paper summarizes the wearable devices currently used for the diagnosis of the two diseases and the related AI classification algorithms, and discusses their advantages and limitations. Finally, it analyzes the main problems in the related research and points out possible research directions in this field.
Review of Shape Representation for Objects
WU Gang,XU Li-min
Computer Science. 2019, 46 (7): 30-37.  doi:10.11896/j.issn.1002-137X.2019.07.005
Shape retrieval and recognition are widely applied in medical diagnostics, target recognition, image retrieval, computer vision, etc. Efficient shape retrieval and recognition depend entirely on an excellent shape representation algorithm. This paper proposes assessment criteria for shape representations. According to these criteria, existing shape representations are categorized into linear combination representations, spatial association relationships, feature representations based on differential and integral properties of shapes, and deformation representations. Each of these methods is analyzed and assessed in terms of its mathematical principle, multiscale representation ability, variants, robustness, reconstruction of the original shape, discrimination of signal from noise, etc. Furthermore, the advantages and disadvantages of each algorithm are discussed, particularly from the standpoint of their mathematical principles. Finally, suggestions for future research are given.
Review on Click-through Rate Prediction Models for Display Advertising
LIU Meng-juan,ZENG Gui-chuan,YUE Wei,QIU Li-zhou,WANG Jia-chang
Computer Science. 2019, 46 (7): 38-49.  doi:10.11896/j.issn.1002-137X.2019.07.006
In recent years, the study of click-through rate (CTR) prediction models has attracted much attention from both academia and industry. Focusing on existing CTR prediction models for display advertising, this paper reviews preprocessing techniques for sample features, CTR prediction schemes based on traditional machine learning models as well as the latest deep learning models, and the main performance evaluation indexes for CTR prediction models. In particular, typical CTR prediction schemes are evaluated on a public dataset, and quantitative analysis and performance comparisons are given. Finally, the open problems and research trends in CTR prediction are discussed.
Network & Communication
Content-centric Routing Scheme in Vehicular Social Networks
SHI Jun-ling,WANG Xing-wei,HUANG Min
Computer Science. 2019, 46 (7): 50-55.  doi:10.11896/j.issn.1002-137X.2019.07.007
To satisfy the demand of Vehicular Social Network (VSN) users for multimedia (e.g., video), a content-centric routing scheme for VSNs (CVSN) is proposed based on the Information-Centric Networking (ICN) architecture. In CVSN, interest routing selects the forwarding node for an interest packet based on content-store similarity metrics among vehicle nodes, while data routing selects the forwarding node for a data packet based on same-way probability metrics among vehicle nodes. Meanwhile, in-network cached content is managed according to interest preference metrics among VSN users: when the caching space is full, the content with the lowest user interest preference is dropped first. Simulation experiments oriented to an urban bus scenario, comparing existing schemes on packet delivery, average hops, average delay and network overhead, show that the proposed scheme is feasible and effective.
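The cache-management rule the abstract describes (when the cache is full, drop the content with the lowest user interest preference) can be sketched as follows; the class and field names are invented for this illustration, not taken from the paper.

```python
# Minimal sketch of preference-based cache replacement:
# when the cache is full, the entry with the lowest interest
# preference is evicted to make room for new content.

class PreferenceCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # content name -> (data, interest preference)

    def insert(self, name, data, preference):
        if name not in self.store and len(self.store) >= self.capacity:
            # evict the cached item with the lowest interest preference
            victim = min(self.store, key=lambda n: self.store[n][1])
            del self.store[victim]
        self.store[name] = (data, preference)

    def lookup(self, name):
        entry = self.store.get(name)
        return entry[0] if entry else None

cache = PreferenceCache(capacity=2)
cache.insert("video/a", b"frame-a", preference=0.9)
cache.insert("video/b", b"frame-b", preference=0.2)
cache.insert("video/c", b"frame-c", preference=0.7)  # evicts video/b
```

Here the third insertion evicts `video/b`, whose preference score (0.2) is the lowest of the cached items.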
Enhancement Method of Throughput in Ultra-dense Network Based on Hierarchical Multi-hop Physical Layer Network Coding
JI Bao-feng, WANG Yi-dan, XING Bing-bing, LI Yu-qi, GAO Hong-feng, HAN Cong-cheng
Computer Science. 2019, 46 (7): 56-60.  doi:10.11896/j.issn.1002-137X.2019.07.008
Ultra-dense networks enhance spatial reuse through the dense deployment of small cells, and are an effective solution for meeting the 1 000-fold increase in data traffic and the 10- to 100-fold increase in user experienced rate expected in future 5G. However, the interference caused by dense small-cell deployment and the multi-hop transmission caused by the small coverage of small base stations reduce network capacity and degrade user experience. Therefore, to address the “coverage” and “capacity” problems of ultra-dense networks in 5G, a throughput enhancement method based on hierarchical multi-hop physical layer network coding is proposed. By exploiting the heterogeneous hierarchical structure of multiple nodes and the high spectral efficiency of multi-hop physical layer network coding, the throughput of an ultra-dense network can be enhanced effectively, time slots from the source node to the terminal node can be saved, and system interference can be reduced, without requiring a direct link from the source node to the terminal node. Finally, the validity and correctness of the proposed scheme are verified, and its system performance is shown to improve effectively over the traditional scheme.
Study on Energy Efficient M2M Uplink Subcarrier and Power Allocation in LTE-A Network
HE Ying-qing,LI Ning,WANG Cong,CHEN Yan-cheng,XU Jian-hui
Computer Science. 2019, 46 (7): 61-66.  doi:10.11896/j.issn.1002-137X.2019.07.009
Improving the energy efficiency of devices in M2M communication and prolonging the lifetime of battery-powered M2M devices is a critical problem. In this paper, the energy-efficient sub-channel and power allocation problem is studied for M2M communication over LTE-A cellular uplinks. The sub-channel and power allocation problem is formulated under the delay requirements of M2M devices and the resource allocation constraints of uplink communication. Due to the high computational complexity of solving this problem directly, a Lagrange-based resource allocation algorithm is further proposed, which obtains acceptable power and energy efficiency performance with lower complexity. The simulation results show that the proposed algorithm outperforms the greedy algorithm and is closer to the optimal algorithm in power and energy efficiency performance.
Study on TDOA Direction Finding of Iterative Model Without Sound Speed
HOU Dong-sheng,WANG Hai,CUI Xun-xue
Computer Science. 2019, 46 (7): 67-73.  doi:10.11896/j.issn.1002-137X.2019.07.010
For the time-difference-of-arrival (TDOA) sound-source direction finding problem, if the propagation velocity of the source signal is known, the problem can be solved by an iterative algorithm based on the maximum likelihood principle, combined with the Direction of Arrival (DOA) estimate from Linear Least Squares (LLS). In some conditions, however, the velocity parameter is difficult to obtain, or its measured deviation is large, which affects the estimation accuracy of the DOA. Exploiting the ability of LLS to perform direction finding without the sound velocity, and the ability of the maximum-likelihood iterative algorithm to solve directly for the azimuth and elevation of the sound source, this paper proposes an iterative model that does not require the sound speed. The Taylor series expansion method and the Levenberg-Marquardt method are used to solve the problem without knowing the velocity of the sound signal beforehand. In addition, the mean square error of direction finding for the Taylor series expansion method and the Levenberg-Marquardt method, and the Cramér-Rao lower bound of the iterative model without sound speed, are derived. The simulation results show that when measurement errors exist in the sound speed, the proposed method is clearly superior to the existing methods.
Geographic Routing Protocol Based on Prediction for Urban Vehicular Ad Hoc Networks
HUANG De-ling,YAN Yu-song,PENG Da-qin
Computer Science. 2019, 46 (7): 74-80.  doi:10.11896/j.issn.1002-137X.2019.07.011
The topology of vehicular ad hoc networks changes rapidly, which makes wireless connections between nodes unstable. Therefore, location-based routing protocols that rely on greedy forwarding often fail because of disconnections between nodes. Aiming at this problem, this paper proposes a method to judge link reliability, in which the reliability of a transmission link is evaluated by calculating a link stability factor and a distance attenuation factor for each neighbor node. On this basis, a routing protocol is designed in which the most reliable links form the route, thereby increasing the packet delivery ratio. At the same time, an algorithm is presented to predict intersection nodes, which lets a data packet selectively use intersection coordinators to determine its transmission path, enhancing routing efficiency. Simulation results show that the proposed algorithm achieves better routing performance in terms of packet delivery ratio, end-to-end delay and packet forwarding times.
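A hedged sketch of the next-hop selection idea just described: each neighbor is scored by a link stability factor combined with a distance attenuation factor, and the neighbor with the highest score is chosen. The abstract does not give the actual formulas, so the stability ratio and exponential attenuation below are illustrative stand-ins, and all names and parameter values are hypothetical.

```python
import math

# Hypothetical link-reliability score: a stability factor (predicted
# link lifetime relative to an evaluation window) times a distance
# attenuation factor (decaying with distance over the radio range).

def link_reliability(lifetime_s, window_s, distance_m, tx_range_m):
    stability = min(lifetime_s / window_s, 1.0)       # link stability factor
    attenuation = math.exp(-distance_m / tx_range_m)  # distance attenuation factor
    return stability * attenuation

def best_next_hop(neighbors):
    # neighbors: list of (node_id, predicted lifetime in s, distance in m)
    return max(neighbors,
               key=lambda n: link_reliability(n[1], 10.0, n[2], 250.0))[0]

neighbors = [("v1", 3.0, 100.0), ("v2", 9.0, 120.0), ("v3", 9.0, 240.0)]
choice = best_next_hop(neighbors)
```

With these toy numbers, `v2` wins: it is almost as stable as `v3` but much closer, and much more stable than `v1`.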
Distributed Subgradient Optimization Algorithm with Communication Delays for Multi-agent Switched Networks
WANG Jun-ya,LI Jia-di,LI De-quan
Computer Science. 2019, 46 (7): 81-85.  doi:10.11896/j.issn.1002-137X.2019.07.012
In a general unbalanced directed switching network, there may be communication delays among agents. In light of this, this paper proposes a distributed subgradient optimization algorithm with communication delays for multi-agent switched networks. The unconstrained convex optimization problem with communication delays is converted into one without delays by augmenting the communication network with delay nodes. Using a non-quadratic Lyapunov function method, the convergence of the algorithm is proved, provided that the unbalanced directed switching network is periodically strongly connected and the communication delays are upper bounded. Because both the network topology conditions and the communication delays are taken into account, the algorithm is more general and practical. Finally, a simulation example demonstrates the effectiveness of the proposed algorithm.
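To make the basic mechanism concrete, here is a toy distributed subgradient iteration on a fixed network, ignoring delays and switching (which are the paper's actual contribution): each agent mixes its neighbors' states with stochastic weights, then steps along a subgradient of its local objective fᵢ(x) = |x − cᵢ|. The weight matrix, objectives and step-size rule are illustrative choices, not the paper's.

```python
# Toy distributed subgradient method: three agents minimize
# f(x) = |x-1| + |x-2| + |x-3| cooperatively; the minimizer is x = 2.

def subgrad(x, c):
    # a subgradient of |x - c|
    return 1.0 if x > c else (-1.0 if x < c else 0.0)

def run(targets, weights, steps=200):
    x = [0.0] * len(targets)
    for k in range(1, steps + 1):
        # consensus step: mix neighbors' states with stochastic weights
        mixed = [sum(weights[i][j] * x[j] for j in range(len(x)))
                 for i in range(len(x))]
        step = 1.0 / k  # diminishing step size
        x = [mixed[i] - step * subgrad(mixed[i], targets[i])
             for i in range(len(x))]
    return x

W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
x = run([1.0, 2.0, 3.0], W)
```

After 200 iterations all three agents hover near the common minimizer x = 2, and the mixing keeps their disagreement small.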
Information Security
Study on Malicious Program Detection Based on Recurrent Neural Network
WANG Le-le,WANG Bin-qiang,LIU Jian-gang,ZHANG Jian-hui,MIAO Qi-guang
Computer Science. 2019, 46 (7): 86-90.  doi:10.11896/j.issn.1002-137X.2019.07.013
In view of the low efficiency of traditional malicious program detection and its lack of automatic analysis, this paper studies the use of recurrent neural networks to detect and classify malicious programs in a deep learning setting. First, QEMU is used to capture the APIs and their parameter sequences called when a malicious program runs; after behavior abstraction, these form the characteristic sequence of the malicious program. The feature sequence is then mapped to fixed-length word vectors with a hierarchical log-bilinear model (HLBL), and these word vectors are assembled into the input matrix of a recursive neural network (RNN). By training the recursive neural network, a multi-layer semantic aggregation model of malicious programs is established to perform classification and detection. The experimental data show that the recursive neural network model detects malicious programs effectively; compared with traditional machine learning algorithms, its detection rate increases by 17%. In particular, when the concept of tensors is introduced and the Recursive Neural Tensor Network (RNTN) model is used, the detection rate increases by a further 7% over the RNN model while reducing the overall number of parameters and the amount of computation. The experimental data fully show that the recursive neural network model can detect and classify malicious programs in a big data environment.
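The core pipeline step, scoring a sequence of embedded API calls with a recurrent state, can be sketched minimally as follows. This is not the paper's trained model: the single-unit cell, fixed toy weights and made-up input values are all illustrative, standing in for HLBL embeddings and a learned network.

```python
import math

# Toy recurrent scoring of an API-call sequence: one scalar hidden
# state is updated per embedded call, then squashed into a
# probability-like malicious score. Weights are fixed toy values.

def rnn_score(sequence, w_in=0.8, w_rec=0.5, w_out=1.2):
    h = 0.0
    for x in sequence:                       # one embedded API call per step
        h = math.tanh(w_in * x + w_rec * h)  # recurrent state update
    return 1.0 / (1.0 + math.exp(-w_out * h))  # sigmoid readout

# hypothetical embedded sequences: positive values stand in for
# suspicious API patterns, negative values for benign ones
p_mal = rnn_score([0.9, 1.2, 1.5, 1.1])
p_ben = rnn_score([-0.8, -1.0, -0.6, -0.9])
```

Because the toy weights are positive, a run of "suspicious" inputs pushes the hidden state positive and the score above 0.5, while the benign sequence stays below it.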
Efficient Public-key Searchable Encryption Scheme Against Inside Keyword Guessing Attack
WANG Shao-hui,ZHANG Yan-xuan,WANG Hua-qun,XIAO Fu,WANG Ru-chuan
Computer Science. 2019, 46 (7): 91-95.  doi:10.11896/j.issn.1002-137X.2019.07.014
In the cloud environment, how to search users’ encrypted data efficiently is a research hotspot. Most current public-key searchable encryption schemes cannot effectively resist the Inside Keyword Guessing Attack (IKGA) launched by cloud servers, while the existing anti-IKGA schemes suffer from low efficiency or generate the same search trapdoor for the same keyword, which reveals keyword statistics. This paper proposes a new efficient anti-IKGA public-key searchable encryption scheme in which the search trapdoor is generated by a non-deterministic algorithm. Based on a modified DLIN (Decisional Linear) assumption, the new scheme is proven to satisfy semantic security against IKGA in the random oracle model. Trapdoors are generated with random numbers, so the same keyword yields different trapdoors. Compared with other PEKS schemes, the new scheme reduces the number of bilinear pairing operations and thus has better performance.
AB-ACCS Scheme for Revocation of Efficient Attributes in Cloud Storage Services
Computer Science. 2019, 46 (7): 96-101.  doi:10.11896/j.issn.1002-137X.2019.07.015
To improve the security and efficiency of cloud storage access control, cloud storage service technologies at home and abroad provide security support through authentication, user authorization, data integrity and encryption methods, but they only encrypt packets with the HTTPS protocol during communication or have data files re-encrypted by a third-party agency, which leads to data security risks in cross-domain sharing; moreover, their encryption processes suffer from large computational overhead and low efficiency. To solve these problems, this paper proposes AB-ACCS, a cloud storage service scheme with efficient attribute revocation. The scheme uses an improved CP-ABE for access control: without involving a third-party agency, the CSP performs ciphertext re-encryption operations, which reduces the communication burden between authorities and users. Meanwhile, to improve efficiency, the access control algorithms cover new file creation, new user authorization, attribute revocation and file access, combined with a lazy re-encryption technique. Experimental results verify that the scheme is effective and feasible for cloud storage services, and the security analysis shows both forward and backward confidentiality.
Adaptive Testing Technology Based on Cognitive Diagnostic in Cybersecurity
QI Bin,WANG Yu,ZOU Hong-xia,LI Ji-xing
Computer Science. 2019, 46 (7): 102-107.  doi:10.11896/j.issn.1002-137X.2019.07.016
To study the cybersecurity literacy of personnel more effectively and accurately diagnose their specific levels of knowledge and skill, this paper develops an adaptive cybersecurity testing technique based on cognitive diagnosis with multi-level attribute scoring, combining psychometrics with computerized testing technology. Firstly, a hierarchical cybersecurity knowledge model is designed to better accommodate the complex knowledge structure and to validate the research. Then, the attribute hierarchy is incorporated into a polytomous-scoring cognitive diagnosis model to achieve comprehensive improvements, and an accurate, efficient parameter estimation method and a suitable item selection strategy are proposed to improve performance. The experimental results show that the adaptive cybersecurity testing technique with multi-level attribute scoring improves efficiency by 10.5% compared with the traditional multi-level scoring model, providing a reference for research on computerized adaptive testing.
Study on SQL Injection Detection Based on N-Gram
WAN Zhuo-hao,XU Dong-dong,LIANG Sheng,HUANG Bao-hua
Computer Science. 2019, 46 (7): 108-113.  doi:10.11896/j.issn.1002-137X.2019.07.017
SQL injection attacks are the main security threat faced by the Web. Aiming at the difficulty of detecting SQL injection, this paper proposes an SQL injection detection method based on N-grams. The method transforms SQL statements into feature vectors of fixed dimension based on N-grams, and improves the distance measure by changing the weights of different feature subsequences. A fuzzy distance, obtained by fusing the improved distance and the chi-square distance through a BP neural network, is used as the distance criterion between vectors. Firstly, the average feature vector of secure SQL statements is calculated. Then, the distances between each SQL statement and the average feature vector are calculated to determine a distance threshold. The distance between an unknown SQL statement and the average feature vector is compared with this threshold to judge whether the statement is safe. The experimental results show that, compared with feature vectors composed directly of words, the proposed method effectively improves the true positive rate and reduces the false positive rate.
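The featurization-and-threshold idea the abstract describes can be sketched as follows: tokenize statements into N-grams, count them over a fixed vocabulary so every vector has the same dimension, and compare each statement's vector with the average vector of known-safe statements. The subsequence weighting, chi-square distance and BP-network fusion are omitted here, and the sample statements are made up.

```python
from collections import Counter
import math

# Sketch of N-gram featurization for SQL injection screening:
# statements far from the average safe-statement vector are suspect.

def ngrams(sql, n=2):
    toks = sql.lower().split()
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def vectorize(sql, vocab, n=2):
    counts = Counter(ngrams(sql, n))
    return [counts.get(g, 0) for g in vocab]   # fixed-dimension vector

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

safe = ["select name from users where id = 1",
        "select name from users where id = 2"]
attack = "select name from users where id = 1 or 1 = 1"

vocab = sorted({g for s in safe for g in ngrams(s)} | set(ngrams(attack)))
vecs = [vectorize(s, vocab) for s in safe]
avg = [sum(col) / len(vecs) for col in zip(*vecs)]  # average safe vector

d_safe = distance(vectorize(safe[0], vocab), avg)
d_attack = distance(vectorize(attack, vocab), avg)
```

The injected tautology `or 1 = 1` introduces bigrams never seen in safe statements, so its distance from the average safe vector is clearly larger and would fall above a threshold fitted on the safe set.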
Fully-outsourcing CP-ABE Scheme with Revocation in Cloud Computing
JIANG Ze-tao,HUANG Jin,HU Shuo,XU Zhi
Computer Science. 2019, 46 (7): 114-119.  doi:10.11896/j.issn.1002-137X.2019.07.018
In attribute-based encryption (ABE), users encrypt and decrypt information through their own attributes, which is flexible and secure, so ABE is widely used in secure data sharing solutions for cloud storage. However, the standard ABE mechanism has a heavy computational overhead, which restricts its practical application, and it cannot satisfy the requirement that the data owner be able to modify user access rights dynamically and efficiently. Aiming at these problems, this paper proposes a fully-outsourced ciphertext-policy attribute-based encryption scheme supporting attribute revocation. Through computational outsourcing, the complex calculations of key generation, encryption and decryption are handed over to the cloud server, reducing the computational overhead of the key generation center (KGC) and the user, and fine-grained revocation of user attributes is realized through attribute key and ciphertext updating. Finally, the efficiency and functionality of the proposed scheme are analyzed theoretically. The results show that the scheme has good security and high system efficiency.
Study on Dynamic-graph Watermarking Based on Petri Net Coding
SU Qing,LIN Hao,HUANG Jian-feng,HE Fan,LIN Zhi-yi
Computer Science. 2019, 46 (7): 120-125.  doi:10.11896/j.issn.1002-137X.2019.07.019
Aiming at the low data embedding rate of dynamic watermarking, this paper proposes a dynamic-graph watermarking algorithm based on Petri net coding. First, the watermark information is converted into a sequence, which is then encoded as a running-state sequence of a Petri net. Finally, the code that generates the Petri net structure is embedded into the source code of the protected software. Since Petri net transitions produce different markings, multiple values can be expressed by the same Petri net structure, which gives the watermarking scheme a high data embedding rate and error detection ability and enables it to resist typical attacks such as node insertion, transition deletion, place deletion and arc deletion. Finally, the feasibility and effectiveness of the algorithm are verified experimentally, and a distortion attack test is carried out. The results show that the dynamic-graph software watermark based on Petri net coding is robust and strongly resists distortion.
Software & Database Technology
Vulnerability Discovery Approach Based on Similarity Matching of Program Slicing
LIU Qiang,KUANG Xiao-hui,CHEN Hua,LI Xiang,LI Guang-ke
Computer Science. 2019, 46 (7): 126-132.  doi:10.11896/j.issn.1002-137X.2019.07.020
Vulnerability analysis based on similarity matching is one of the most effective vulnerability analysis methods. Reducing the false positive rate without increasing the false negative rate, while improving efficiency, is the main goal in optimizing such methods. Aiming at these challenges, this paper proposes an optimized vulnerability analysis framework based on similarity matching of program slices, covering code slicing, feature extraction and vectorization around vulnerable key points. The core idea of the framework is to take the semantic context slice of known vulnerable code as a reference, calculate the similarity between slices of the code under test and the vulnerable sample slice, and thus determine the likelihood of a vulnerability. The framework is implemented and validated on open source projects with known vulnerabilities. Compared with existing research, the slice-based similarity framework can closely describe the vulnerability context, and the similarity-matching vulnerability discovery method is optimized by the slicing technique. The proposed framework and method are verified to effectively reduce both the false positive rate and the false negative rate of vulnerability discovery.
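A minimal sketch of the similarity-matching step: represent each program slice as a bag of lexical tokens and compare it with a vulnerable sample slice by cosine similarity. The paper's feature extraction over slices is far richer; the token-bag representation and the sample slices below are invented for illustration.

```python
from collections import Counter
import math

# Bag-of-tokens cosine similarity between a candidate slice and a
# known vulnerable sample slice; higher similarity suggests the
# candidate may contain the same kind of flaw.

def token_vector(slice_lines):
    return Counter(tok for line in slice_lines for tok in line.split())

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

vuln_slice = ["buf = malloc ( n )", "strcpy ( buf , src )"]      # known flaw
test_slice = ["buf = malloc ( len )", "strcpy ( buf , input )"]  # similar code
other_slice = ["x = fopen ( path , mode )", "fclose ( x )"]      # unrelated

sim_hit = cosine(token_vector(vuln_slice), token_vector(test_slice))
sim_miss = cosine(token_vector(vuln_slice), token_vector(other_slice))
```

The slice that repeats the unchecked `malloc`/`strcpy` pattern scores much higher against the vulnerable sample than the unrelated file-handling slice, so a threshold on this score would flag it for review.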
Automatic Vulnerability Detection and Test Cases Generation Method for Vulnerabilities Caused by SEH
HUANG Zhao,HUANG Shu-guang,DENG Zhao-kun,HUANG Hui
Computer Science. 2019, 46 (7): 133-138.  doi:10.11896/j.issn.1002-137X.2019.07.021
Structured Exception Handling (SEH), offered by the Windows operating system, is a mechanism for handling program errors or exceptions. However, because SEH handles exceptions via a linked chain, corresponding vulnerabilities may arise. To address this problem and improve program security, a method is proposed to generate test cases for SEH-based vulnerabilities. First, the method judges whether the program is at risk of being attacked through SEH. If there is a risk, test case constraints are constructed and adjusted; then, by solving these constraints, the corresponding test cases are generated automatically. On the one hand, this method extends the current automatic test case generation pattern; on the other hand, it can generate effective test cases even when GS protection is turned on. Finally, the effectiveness of the method is verified by experiments.
Matrix Formalization Based on Coq Record
MA Zhen-wei,CHEN Gang
Computer Science. 2019, 46 (7): 139-145.  doi:10.11896/j.issn.1002-137X.2019.07.022
Matrices have a wide range of applications in engineering systems, and the correctness of matrix operations has an important impact on the reliability of those systems. Coq is a powerful higher-order theorem prover based on the typed λ-calculus. Although the Coq type system describes variable-sized dynamic data types well, it lacks a satisfactory description mechanism for data types such as fixed-size vectors and matrices. At present, there is no vector or matrix library in the Coq standard library, so it is difficult to use Coq to formally verify theorems or algorithms involving matrices. To solve these problems, this paper proposes a matrix implementation based on the Record type, defines a set of basic matrix functions and proves their basic properties. The verification of a flight control transformation matrix can be done easily using the matrix types and related lemmas provided in this paper. Compared with other matrix implementations, this method is not only relatively simple to implement but also simpler and more convenient to use.
Test Case Generation Method Based on Particle Swarm Optimization Algorithm
ZHANG Na,TENG Sai-na,WU Biao,BAO Xiao-an
Computer Science. 2019, 46 (7): 146-150.  doi:10.11896/j.issn.1002-137X.2019.07.023
To address premature convergence and the tendency to fall into local extrema in standard particle swarm optimization, this paper puts forward a particle swarm optimization algorithm with reverse learning and re-search for test case generation. Firstly, the learning factors are improved with a nonlinearly decreasing inertia weight function to realize a preliminary search of the population, and the gradient descent method is used to re-search around the optimal and suboptimal solutions. Secondly, taboo regions centered on the extreme points are set, and population diversity is improved through reverse learning of the particles outside the taboo regions. Finally, the branch distance method is used to construct the fitness function that evaluates the quality of test cases. Experimental results show that the proposed method has advantages in coverage, number of iterations and defect detection rate.
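To ground the two ingredients named above, a toy version: plain particle swarm search with a nonlinearly decreasing inertia weight, minimizing a branch-distance-style fitness |x − 42| that reaches 0 when the input drives the target branch `x == 42`. The reverse-learning, taboo-region and re-search refinements of the paper are omitted, and all coefficients are illustrative choices.

```python
import random

def fitness(x):
    return abs(x - 42.0)  # branch distance for a condition like x == 42

def pso(n=20, iters=100, lo=-100.0, hi=100.0, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                       # each particle's best position
    gbest = min(xs, key=fitness)        # swarm-wide best position
    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2   # nonlinearly decreasing inertia
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            vs[i] = max(-20.0, min(20.0, vs[i]))   # velocity clamp
            xs[i] += vs[i]
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = xs[i]
                if fitness(xs[i]) < fitness(gbest):
                    gbest = xs[i]
    return gbest

best = pso()
```

With 20 particles and 100 iterations the swarm homes in on an input very close to 42, i.e. a test case that exercises the target branch.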
Artificial Intelligence
Sentiment Classification Towards Question-Answering Based on Bidirectional Attention Mechanism
SHEN Chen-lin, ZHANG Lu, WU Liang-qing, LI Shou-shan
Computer Science. 2019, 46 (7): 151-156.  doi:10.11896/j.issn.1002-137X.2019.07.024
Sentiment classification is a fundamental task in natural language processing which aims to infer the sentiment polarity of a given text. Previous studies on sentiment classification mainly focus on sentences, documents and tweets. Different from these, this paper focuses on a novel text style, the question-answering (QA) review, for sentiment classification. Firstly, a large-scale, high-quality QA review corpus is collected and built. Then, a bidirectional attention neural network for QA sentiment classification is proposed. Specifically, the question and answer texts are each encoded with a Bi-LSTM; sentiment weights in the question and answer texts are then computed jointly through a bidirectional attention mechanism; finally, a sentiment matching representation of each QA review is obtained from the sentiment weights. Empirical studies show that the proposed approach achieves strong results (75.5% accuracy and 61.4% macro-F1) and improves markedly over the baselines.
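The bidirectional attention step can be sketched in miniature: score every question token against every answer token, softmax the scores to get one weight distribution per token, and do the same in the reverse direction. The hand-made 2-d token vectors below stand in for Bi-LSTM hidden states; the real model's dimensions and scoring function are not specified here.

```python
import math

# Minimal dot-product attention in both directions between a
# "question" and an "answer" sequence of toy 2-d token states.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query_vecs, key_vecs):
    # one weight distribution over the keys for each query token
    return [softmax([sum(q * k for q, k in zip(qv, kv)) for kv in key_vecs])
            for qv in query_vecs]

question = [[0.1, 0.9], [0.8, 0.2]]   # hypothetical Bi-LSTM states
answer = [[0.7, 0.1], [0.2, 1.0]]

q_to_a = attend(question, answer)   # question attends to answer
a_to_q = attend(answer, question)   # answer attends to question
```

Each row of `q_to_a` sums to 1, and a question token aligns most strongly with the answer token whose state points in a similar direction, which is the intuition behind using these weights as sentiment weights.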
Feature Selection Algorithm Based on Rough Sets and Fruit Fly Optimization
FANG Bo,CHEN Hong-mei,WANG Sheng-wu
Computer Science. 2019, 46 (7): 157-164.  doi:10.11896/j.issn.1002-137X.2019.07.025
Abstract PDF(1288KB) ( 539 )   
References | Related Articles | Metrics
Feature selection is one of the most important data preprocessing steps in the field of pattern recognition, aiming at searching for the most effective subset of the original data set with the best value of the evaluation function. This paper proposed a new feature selection strategy based on rough set theory and the fruit fly optimization algorithm. A novel double-strategy evolutionary fruit fly optimization algorithm (DSEFOA) is used to search the feature subset and execute the iterative optimization. Specially, the selected feature subset is evaluated by a fitness function constructed from attribute dependency and attribute importance simultaneously, which aims at finding as many important features as possible in the feature space and further selecting the effective feature subset contributing most to the decision. Experimental results on UCI datasets show that the proposed feature selection method can effectively search for the feature subset with minimum information loss and achieve high classification accuracy.
Distribution Routing Problem Based on Cuckoo Search Algorithm with Directional Mutation
LIU Xiao-zhen,LIU Jing-sen
Computer Science. 2019, 46 (7): 165-171.  doi:10.11896/j.issn.1002-137X.2019.07.026
Abstract PDF(1987KB) ( 493 )   
References | Related Articles | Metrics
In order to maintain the characteristics of the Lévy flight mechanism and the preference for the random walk strategy in the basic cuckoo search algorithm, this paper proposed a cuckoo search algorithm based on directional mutation to solve the distribution routing problem. Firstly, the quick sort method is used to map each dimensional element of a real-coded individual into a city number, thus establishing the connection between the algorithm and the problem model. Then the neighboring-region search method is used to determine the city access order, i.e., the neighboring cities of the current city are found by the distance between cities, so as to enhance the convergence speed of the algorithm. Meanwhile, in the local search mechanism of the algorithm, the average fitness function is used to divide the population into two subgroups, and a corresponding directional mutation mechanism is adopted for each subgroup, so that the algorithm searches purposefully and its local search ability is enhanced. Experimental results on test cases of the standard TSP database show that the deviation rate of the proposed algorithm is significantly reduced in each case. The deviation rate of the algorithm is smaller than that of the comparison algorithms in both the optimal value and the average value, and its solution effect is better in the path planning problem.
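The quick-sort mapping from a real-coded individual to a city tour amounts to ranking the vector's components (a random-key style decoding). A minimal sketch; the function names and the distance-matrix convention are assumptions:

```python
def decode_tour(real_vector):
    # Map a real-coded individual to a city permutation: cities are visited
    # in the rank order of their real-valued keys (the quick-sort mapping).
    return sorted(range(len(real_vector)), key=lambda i: real_vector[i])

def tour_length(tour, dist):
    # Total closed-tour length under a symmetric distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```

Any continuous mutation of the real vector (e.g., a Lévy flight step) changes the ranking and therefore the tour, which is what connects the continuous search mechanism to the discrete routing model.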
Text Sentiment Classification Based on Deep Forests with Enhanced Features
HAN Hui,WANG Li-ming,CHAI Yu-mei,LIU Zhen
Computer Science. 2019, 46 (7): 172-179.  doi:10.11896/j.issn.1002-137X.2019.07.027
Abstract PDF(1878KB) ( 520 )   
References | Related Articles | Metrics
To effectively realize sentiment orientation prediction of review texts, a deep forest algorithm BFDF (Boosting Feature of Deep Forest) based on the deep forest model was proposed to classify texts. Firstly, binary features and emotional semantic probability features are extracted. Secondly, the evaluation objects in the binary features are clustered and feature fusion is performed. Then, the cascade representation learning ability of the deep forest is improved to avoid the gradual reduction of feature information. Finally, the AdaBoost method is integrated into the deep forest, so that the deep forest attends to the importance of different features, and the improved model BFDF is obtained. Experimental results on a hotel review corpus demonstrate the effectiveness of the proposed method.
Distributed Convolutional Neural Networks Based on Approximate Newton-type Method
WANG Ya-hui, LIU Bo, YUAN Xiao-tong
Computer Science. 2019, 46 (7): 180-185.  doi:10.11896/j.issn.1002-137X.2019.07.028
Abstract PDF(1552KB) ( 681 )   
References | Related Articles | Metrics
Most machine learning problems can ultimately be attributed to optimization problems (model learning). Optimization mainly uses mathematical methods to study the optimal ways and solutions for various problems, and plays an increasingly important role in scientific computing and engineering analysis. With the rapid development of deep networks, the scale of data and parameters also increases. Although significant advances have been made in GPU hardware, network architectures and training methods in recent years, it is still difficult for a single computer to efficiently train deep network models on large data sets. The distributed approximate Newton-type method is one of the effective methods to solve this problem, and it is introduced here into the study of distributed neural networks. The distributed approximate Newton-type method distributes samples evenly across multiple computers, so the amount of data to be processed by each computer is reduced, and the computers communicate with each other to complete the training task. This paper proposed distributed deep learning based on the approximate Newton-type method. The DANE algorithm is used for training on the same network. As the number of GPUs increases exponentially by a factor of 2, the training time decreases exponentially by nearly a factor of 2. This is consistent with the ultimate goal, that is, on the premise of ensuring the estimation accuracy, the existing distributed framework is used to implement the approximate Newton-type algorithm, and the algorithm is used to train the neural network in a distributed manner to improve operating efficiency.
Discovering Popular Social Location with Time Label
LIU Chang-yun,YANG Yu-di,ZHOU Li-hua,ZHAO Li-hong
Computer Science. 2019, 46 (7): 186-194.  doi:10.11896/j.issn.1002-137X.2019.07.029
Abstract PDF(4119KB) ( 473 )   
References | Related Articles | Metrics
A popular social location refers to a place that most people visit frequently in daily life, which is widely used in recommendation systems, targeted advertisement applications and other fields. With the rapid development of location-based social networks (LBSN), the identification of popular social locations has become an important research hotspot in spatio-temporal data mining. However, existing research mainly focuses on mining popular social locations from LBSN, but ignores the time factor of popular social locations. Therefore, this paper proposed a new algorithm for mining popular locations with time labels. The proposed algorithm first quantifies the time information in the LBSN dataset to obtain a set of frequent social locations with respect to individual users, then calculates the popularity of these locations with respect to groups of users, and finally identifies popular social locations that meet the requirements. This paper validated the efficiency and correctness of the algorithm by using about 10 months of Foursquare Tokyo user check-in data. The results show that the proposed algorithm can find popular social locations with time labels more accurately.
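The core counting step, i.e., finding locations visited by many distinct users within a given time slot, can be sketched as follows. This is a simplified illustration; the data layout, the distinct-user popularity criterion and the names are assumptions, not the paper's exact definitions:

```python
def popular_locations(checkins, time_slot, min_users):
    # checkins: iterable of (user, location, hour) tuples. A location is
    # "popular" in a time slot if at least min_users distinct users
    # checked in there during that slot.
    lo, hi = time_slot
    visitors = {}
    for user, loc, hour in checkins:
        if lo <= hour < hi:
            visitors.setdefault(loc, set()).add(user)
    return {loc for loc, users in visitors.items() if len(users) >= min_users}
```

Attaching the time slot to the result is what gives each popular location its time label: the same place may qualify in the morning slot but not at night.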
Knowledge Forgetting in Multi-agent Modal Logic System KD45n
WEN Xi-ming,FANG Liang-da,YU Quan,CHANG Liang,WANG Ju
Computer Science. 2019, 46 (7): 195-205.  doi:10.11896/j.issn.1002-137X.2019.07.030
Abstract PDF(1468KB) ( 411 )   
References | Related Articles | Metrics
Forgetting plays an important role in knowledge representation and reasoning. It has been extensively studied in many logics and widely applied to various fields. Modal logic is suitable for representing and reasoning about the knowledge of agents. With the development of research on multi-agent systems, the investigation of knowledge forgetting in multi-agent modal logics has begun to attract attention. However, knowledge forgetting has different properties in different multi-agent modal logic systems, and it cannot be effectively computed in most cases. Therefore, it is necessary to further investigate knowledge forgetting in the multi-agent modal logic system KD45n. Firstly, knowledge forgetting in multi-agent modal logic was defined based on model theory. Then, the main properties of knowledge forgetting in KD45n were analyzed. Finally, an effective algorithm to compute knowledge forgetting in KD45n was provided. The computing algorithm adopts the idea of knowledge compilation, which is an approach for addressing the intractability of a number of artificial intelligence problems. The alternating cover disjunctive formula is introduced; by compiling a general formula into an alternating cover disjunctive formula, knowledge forgetting in KD45n can be effectively computed. The time complexity is double-exponential in the length of the original formula, but polynomial in the length of the compiled one. Compared with the current non-elementary algorithm, this algorithm is more effective and more practical. The research results show that knowledge forgetting in KD45n satisfies many important properties and can be effectively computed.
Bio-inspired Activation Function with Strong Anti-noise Ability
MAI Ying-chao,CHEN Yun-hua,ZHANG Ling
Computer Science. 2019, 46 (7): 206-210.  doi:10.11896/j.issn.1002-137X.2019.07.031
Abstract PDF(3217KB) ( 339 )   
References | Related Articles | Metrics
Although artificial neural networks are almost comparable to the human brain in image recognition, activation functions such as ReLU and Softplus are only highly simplified simulations of the output response characteristics of biological neurons. There is still a huge gap between artificial neural networks and the human brain in many aspects, such as noise resistance, uncertain information processing and power consumption. In this paper, based on simulation experiments of biological neurons and their response characteristics, a strongly anti-noise activation function Rand Softplus with biological authenticity was constructed by defining and calculating a parameter η, which reflects the randomness of each neuron. Finally, the activation function was applied to a deep residual network and verified on a facial expression dataset. The results show that the recognition accuracy of the proposed activation function is almost equal to that of the current mainstream activation functions when there is no noise or only a small amount of noise, and when the input contains a large amount of noise, it shows good anti-noise performance.
Study on Ocean Data Anomaly Detection Algorithm Based on Improved K-means Clustering
JIANG Hua,WU Yao,WANG Xin,WANG Hui-jiao
Computer Science. 2019, 46 (7): 211-216.  doi:10.11896/j.issn.1002-137X.2019.07.032
Abstract PDF(1828KB) ( 406 )   
References | Related Articles | Metrics
Aiming at the problem of abnormal data mining in marine Argo buoy monitoring data, an anomaly detection algorithm based on a distance criterion was proposed on the basis of the improved K-means algorithm. The algorithm redefines the proximity of ocean data, partitions the data into blocks according to their size and distribution, and adaptively selects candidate initial clustering centers. In the iterative process of the algorithm, the mean distance of the data objects in a cluster to the clustering center is used to give global consideration to the data objects conforming to abnormal features, so as to detect anomalies. Experiments on a simulation dataset and a real dataset show that the algorithm is superior to the comparison algorithms in clustering performance and anomaly detection.
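The distance-criterion check within each cluster can be sketched as follows. This is one hypothetical reading of the abstract's criterion; the threshold factor k and the function names are assumptions:

```python
import numpy as np

def flag_anomalies(points, labels, centers, k=2.0):
    # A point is flagged as anomalous if its distance to its cluster center
    # exceeds k times the mean member-to-center distance of that cluster.
    flags = np.zeros(len(points), dtype=bool)
    for c in range(len(centers)):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        d = np.linalg.norm(points[idx] - centers[c], axis=1)
        flags[idx] = d > k * d.mean()
    return flags
```

Because the threshold is derived from each cluster's own mean distance, dense and sparse clusters get different cut-offs, which matches the abstract's emphasis on giving global consideration within the cluster.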
Missing Data Prediction Algorithm Based on Sparse Bayesian Learning in Coevolving Time Series
SONG Xiao-xiang,GUO Yan,LI Ning,YU Dong-ping
Computer Science. 2019, 46 (7): 217-223.  doi:10.11896/j.issn.1002-137X.2019.07.033
Abstract PDF(1570KB) ( 517 )   
References | Related Articles | Metrics
In view of the fact that most existing algorithms for predicting missing data in coevolving time series are only feasible when a low ratio of the collected data is missing, an efficient missing data prediction method was proposed in this paper. Firstly, compressive sensing theory is applied to model the missing data prediction problem in coevolving time series as the problem of recovering multiple sparse vectors. Secondly, the validity of the model is analyzed from two aspects: whether the sparse representation vector is sufficiently sparse, and whether the sensing matrix satisfies the restricted isometry property. Finally, a novel recovery algorithm based on sparse Bayesian learning, which can solve multiple sparse vector recovery problems by learning support information, is designed for the characteristics of coevolving time series. Simulation results show that the proposed algorithm can effectively predict the missing data in multiple time series simultaneously.
Multi-objective Differential Evolution Algorithm with Fuzzy Adaptive Ranking-based Mutation
DONG Ming-gang,LIU Bao,JING Chao
Computer Science. 2019, 46 (7): 224-232.  doi:10.11896/j.issn.1002-137X.2019.07.034
Abstract PDF(2417KB) ( 357 )   
References | Related Articles | Metrics
In order to improve the convergence and diversity of the multi-objective differential evolution algorithm in solving multi-objective optimization problems, this paper proposed a multi-objective differential evolution algorithm with fuzzy adaptive ranking-based mutation. Firstly, global exploration and local exploitation are balanced by using a fuzzy system which adaptively adjusts the parameters of the ranking-based mutation, so the convergence rate of the algorithm is accelerated and the possibility of the algorithm falling into a local optimum is reduced. Secondly, for the sake of improving the stability and diversity of the algorithm, an initial population with good diversity is obtained through a uniform population initialization method at the beginning of the algorithm. Finally, the discarded individuals are stored in a temporary population for the final selection at the end of each iteration, thereby improving population diversity during the evolution process. Simulation experiments were conducted on seven standard test functions and three test functions with bias features. The experimental results show that compared with four other algorithms, the proposed algorithm has better convergence and diversity, and can effectively approach the true Pareto front. The effectiveness of the fuzzy adaptive ranking-based mutation strategy in the proposed algorithm is also verified by comparative experiments.
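Ranking-based mutation selects parents for the difference vectors with probability increasing in their fitness rank. A minimal sketch of such a selection rule; the linear rank probability and the names are assumptions, not the paper's exact fuzzy-adapted formulation:

```python
import random

def rank_probabilities(pop_size):
    # Better-ranked individuals get higher selection probability:
    # p_r = (N - r) / N for rank r = 0 (best) .. N - 1 (worst).
    return [(pop_size - r) / pop_size for r in range(pop_size)]

def select_ranked(sorted_indices, rng=random.Random(0)):
    # Rejection sampling: draw a uniform candidate, accept with its rank
    # probability. Terminates because the best rank is always accepted.
    probs = rank_probabilities(len(sorted_indices))
    while True:
        j = rng.randrange(len(sorted_indices))
        if rng.random() < probs[j]:
            return sorted_indices[j]
```

In the paper's scheme, a fuzzy system would adapt the bias of this selection over the run, shifting the balance between exploring low-ranked individuals early and exploiting high-ranked ones later.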
Graphics, Image & Pattern Recognition
Lightweight SSD Network for Real-time Object Detection in Automotive Videos
ZHANG Lin-na,CHEN Jian-qiang,CHEN Xiao-ling,CEN Yi-gang,KAN Shi-chao
Computer Science. 2019, 46 (7): 233-237.  doi:10.11896/j.issn.1002-137X.2019.07.035
Abstract PDF(1631KB) ( 553 )   
References | Related Articles | Metrics
Vehicle and pedestrian detection are the most basic and widely studied subjects in the field of advanced driver-assistance systems (ADAS). At present, deep learning achieves the best performance for object detection. However, the computational cost of deep learning algorithms is very high, and the algorithms often require high-performance GPUs. In real applications, an object detection algorithm must be integrated into the vehicle hardware system, so its hardware requirements cannot be too high. Based on the SSD network, a lightweight SSD network was proposed for real-time object detection. By resizing the input images to a smaller size and significantly reducing the node number of the fully connected layer, the network complexity is reduced and the object detection speed is improved. A supervised training method based on a multi-stage loss function was proposed to solve the problems of image deformation and of the updated parameters in the low VGG layers caused by shrinking the input images. Furthermore, because the detection accuracy for vehicles and pedestrians declines after the reduction of calculations, a hierarchical image partition method was proposed to expand the training dataset, which solves the object vanishing problem caused by image shrinking. Experimental results show that the proposed lightweight SSD network not only realizes real-time vehicle and pedestrian detection on a laptop, but also maintains the detection accuracy. Compared with other object detection algorithms, the optimized network achieves faster detection speed for vehicles and pedestrians. Also, the power consumption of the laptop is reduced significantly while the detection accuracy remains the same.
Multi-modal Medical Image Fusion Based on Joint Patch Clustering of Adaptive Dictionary Learning
ANG Li-fang, SHI Chao-yu, LIN Su-zhen, QIN Pin-le, GAO Yuan
Computer Science. 2019, 46 (7): 238-245.  doi:10.11896/j.issn.1002-137X.2019.07.036
Abstract PDF(3179KB) ( 664 )   
References | Related Articles | Metrics
In view of the poor image reconstruction quality caused by the large amount of redundant information in over-complete adaptive dictionaries in medical image fusion, this paper proposed a multi-modal medical image fusion method based on joint image patch clustering and adaptive dictionary learning. First, this method calculates the Euclidean distance between image patches and reduces redundant image patches by comparing the cut-off threshold and the minimum distance between image patches. Then, it extracts the local gradient information of image patches as the clustering center via the local steering kernel regression (SKR) weights, and combines the image patches of two different modalities with the same local gradient information for joint patch clustering. On the basis of joint image patch clustering, it uses the improved K-SVD algorithm to train the clusters formed by patch clustering to obtain sub-dictionaries, and merges the sub-dictionaries into an adaptive dictionary. Finally, the sparse representation coefficients are obtained by the orthogonal matching pursuit (OMP) algorithm and the adaptive dictionary, and they are fused with the "2-norm max" rule. Through reconstruction, the fused image is obtained. Compared with two methods based on multi-scale transforms and six methods based on sparse representation, experimental results show that the proposed method can construct a compact and informative dictionary, and endow the fused image with higher clarity and stronger contrast to facilitate clinical diagnosis and adjuvant treatment.
Image Annotation Based on Topic Fusion and Frequent Patterns Mining
Computer Science. 2019, 46 (7): 246-251.  doi:10.11896/j.issn.1002-137X.2019.07.037
Abstract PDF(1441KB) ( 449 )   
References | Related Articles | Metrics
In order to reduce the "semantic gap", an image annotation approach based on the LDA topic model, using topic fusion and association rule mining, was proposed. First, to solve the problem of low correlation between visual and text information, vector-machine-based multi-category classification is introduced to obtain the category information of the image. Then, the text topic distribution of the image class is calculated from the semantic topic distribution and classification information of the text modality. For an unknown image, the text topic distribution of its class and its own visual topic distribution are weighted, and an initial label set is calculated using this probability model. Finally, based on the probabilities of the initial label words, association rule mining and inter-word correlation are used to mine text relevance and obtain precise semantic annotation. Comparative experiments were carried out on the Corel5K image dataset. The experimental results show the effectiveness of the proposed method.
Visual Object Tracking Algorithm Based on Correlation Filters with Hierarchical Convolutional Features
LI Jian-peng, SHANG Zhen-hong, LIU Hui
Computer Science. 2019, 46 (7): 252-257.  doi:10.11896/j.issn.1002-137X.2019.07.038
Abstract PDF(3035KB) ( 403 )   
References | Related Articles | Metrics
In visual object tracking, correlation tracking algorithms are a hot topic and have developed rapidly in recent years. Correlation filter tracking algorithms have the advantages of fast speed and good effect. However, traditional hand-crafted features are insufficient for target discrimination and fail to handle challenging situations such as deformation, occlusion and blurring. Recently, convolutional neural networks have achieved great success in many fields, and researchers have combined correlation tracking with convolutional features to overcome the shortcoming of hand-crafted features, namely their lack of target semantic information. To cope well with the above problems, this paper proposed a visual object tracking algorithm based on correlation filters with hierarchical convolutional features. The proposed algorithm divides object tracking into two steps: position prediction and scale estimation. Multi-layer convolutional features are trained, and the target position on each convolutional layer is predicted with a coarse-to-fine search approach. Histogram of Oriented Gradients features are used to estimate the optimal scale of the target. Comprehensive experiments on 20 challenging sequences were performed to verify the proposed algorithm, and it was compared with four other trackers. The results show that the proposed approach significantly improves the performance by 48.9% and 51.9% in distance precision and overlap precision respectively compared to the DSST tracker based on hand-crafted features. Moreover, the proposed method outperforms the HCFT tracker using convolutional features by 19.1% and 25.2% in distance precision and overlap precision, respectively. The proposed algorithm overcomes the poor representation ability of traditional hand-crafted features, and its performance is better than that of correlation filter tracking algorithms using hand-crafted features. Compared with correlation filter algorithms using convolutional features, the tracking performance is also improved. The algorithm can accurately track the target in complex situations such as occlusion and blurring.
Image Segmentation Method Based on Improved Pulse Coupled Neural Networks
WANG Yan, XU Xian-fa
Computer Science. 2019, 46 (7): 258-262.  doi:10.11896/j.issn.1002-137X.2019.07.039
Abstract PDF(2976KB) ( 390 )   
References | Related Articles | Metrics
In order to implement the segmentation of images with multiple objects and images with intensity inhomogeneity, this paper proposed an image segmentation method based on region growing with local pulse coupled neural networks (RG-LPCNN). Firstly, the saliency map of the original image is extracted by a saliency detection algorithm. Secondly, the object and the background of the saliency map are coarsely segmented by a histogram thresholding method, and the centroid of the object is taken as the initial seed point of RG-LPCNN. In addition, the convolution of a Gaussian kernel with the original image is used as the amplification coefficient so that the dynamic threshold has local characteristics. Finally, the proposed method is utilized to segment images with multiple objects and images with intensity inhomogeneity. RG-LPCNN is compared with other thresholding segmentation algorithms on natural images and images with intensity inhomogeneity. The results demonstrate that the proposed method has superior segmentation effect for images with multiple objects and images with intensity inhomogeneity.
Enhanced Rotation Invariant LBP Algorithm and Its Application in Image Retrieval
SUN Wei, ZHAO Yu-pu
Computer Science. 2019, 46 (7): 263-267.  doi:10.11896/j.issn.1002-137X.2019.07.040
Abstract PDF(2964KB) ( 574 )   
References | Related Articles | Metrics
Content-based image retrieval (CBIR) is a hot topic in image retrieval, and LBP texture features are commonly used in CBIR. When the classic LBP algorithm is applied to an image retrieval system, the retrieval efficiency is low, and it does not have the property of rotation invariance. Although the rotation invariant LBP (LBPri) algorithm has the property of rotation invariance, its retrieval efficiency is also low. In order to improve the precision and efficiency of CBIR, based on the classical LBP algorithm, this paper proposed an enhanced rotation invariant LBP descriptor (ELBPri). Firstly, the ELBPri descriptor extracts Harris corners from the original grayscale image and samples the image centered at the Harris corners. Secondly, the sampled image is encoded with rotation invariant LBP. Thirdly, the LBP histogram of each image is computed. Finally, the Euclidean distances between the LBP histograms of the images are calculated and the images are sorted by similarity. Experimental results show that compared with the LBPri descriptor, the average precision of the ELBPri descriptor in retrieving general texture image sets with the CBIR system is increased by 5.64%, and the average query time is shortened by 0.4 ms; when retrieving rotated texture image sets, the average precision is increased by 5.94% and the average query time is shortened by 0.12 ms.
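The rotation invariant LBP encoding at the heart of ELBPri can be sketched for a single pixel as follows, using the classic 8-neighbor formulation; the function names are illustrative:

```python
def lbp_code(neighbors, center):
    # neighbors: 8 values sampled circularly around the center pixel.
    # Bit i is set when neighbor i is at least as bright as the center.
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)

def rotation_invariant(code, bits=8):
    # LBP^ri: minimum value over all circular bit rotations of the code,
    # so a rotated texture patch maps to the same descriptor value.
    best = code
    for _ in range(bits - 1):
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & ((1 << bits) - 1)
        best = min(best, code)
    return best
```

A per-image histogram of these rotation-invariant codes is the descriptor compared by Euclidean distance; restricting the sampling to Harris-corner neighborhoods is what distinguishes ELBPri from plain LBPri.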
Multifocus Image Fusion Based on Four Channel Non-separable Additive Wavelet
LIU Bin,CHEN Wen-jiang,XIN Jia-nan
Computer Science. 2019, 46 (7): 268-273.  doi:10.11896/j.issn.1002-137X.2019.07.041
Abstract PDF(9250KB) ( 372 )   
References | Related Articles | Metrics
In order to solve the problem that separable wavelets are not symmetric and cannot yield high spatial resolution images, this paper constructed four-channel 6 × 6 filter banks by the method of non-separable wavelet transform whose dilation matrix is [2,0;0,2], and presented a multi-focus image fusion method based on this kind of filter bank. The multifocus images are decomposed in additive wavelet mode by using the low-pass filter of the filter bank, and the wavelet planes are obtained. Then the fusion rules are used to fuse the decomposed wavelet planes. The experimental results show that this method has better fusion effect in definition and spatial resolution. Its fusion performance outperforms the Laplacian pyramid fusion method, the fusion method based on wavelet decomposition, and the fusion method based on three-channel non-separable symmetric wavelets.
Geometric Features Matching with Deep Learning
LI Jian, YANG Xiang-ru, HE Bin
Computer Science. 2019, 46 (7): 274-279.  doi:10.11896/j.issn.1002-137X.2019.07.042
Abstract PDF(3843KB) ( 685 )   
References | Related Articles | Metrics
Matching local geometric features on real-world depth images is a challenging task due to the noise and low resolution of 3D scans captured by depth cameras such as Kinect. At present, most solutions to this problem are based on feature histogram methods, which require a large amount of calculation and place strict requirements on the rotation of the scene. This paper proposed a data-driven method. From a large number of well-reconstructed RGB-D datasets, a self-supervised deep learning method is used to construct a model that can describe the geometric correspondence between three-dimensional data. Then, approximate corresponding points of two parts of the point cloud are obtained by using the KD-tree-based K nearest neighbor (KNN) algorithm. After removing erroneous matching point pairs with RANSAC, a relatively accurate set of feature point pairs is obtained by estimating the geometric transformation. By registering and comparing the models in the Stanford University point cloud library and a David plaster model collected in a real environment, the experiments show that the proposed method can not only extract the local geometric features of unknown objects for registration, but also performs well when dealing with large changes in spatial angle.
Concave-convex Manufacturing Features Recognition Based on 3D Reconstruction of Single View
MIAO Hui-cui, WANG Ji-hua, ZHANG Quan-ying
Computer Science. 2019, 46 (7): 280-285.  doi:10.11896/j.issn.1002-137X.2019.07.043
Abstract PDF(4728KB) ( 602 )   
References | Related Articles | Metrics
This paper proposed a new method of automatic feature recognition that does not rely on a CAD design model, so as to achieve automatic robot recognition of concave-convex manufacturing features. The method takes a single image of the part as the identification clue. Firstly, the surface of the part is reconstructed by an improved shape-from-shading method. Then, the feature segmentation lines used to segment the surface are calculated by analyzing the surface shape indexes of the reconstructed model, and the surface is segmented by the feature lines to obtain the corresponding classification regions. Finally, based on the feature recognition rules, concave and convex manufacturing features are identified effectively. This algorithm can effectively solve the problem of automatic identification of manufacturing features in the absence of a CAD model, providing an important method for automatic feature recognition by robots in incoming-material processing and assembly. The validity and accuracy of the proposed method were verified by an example.
Video Stitching Algorithm Based on SIFT and Its Optimization
YANG Si-yan,HE Guo-qi,LIU Ru-yi
Computer Science. 2019, 46 (7): 286-291.  doi:10.11896/j.issn.1002-137X.2019.07.044
Abstract PDF(1853KB) ( 605 )   
References | Related Articles | Metrics
At present, the large amount of video information obtained by independent video devices in small scenes cannot meet the requirements of information processing in large scenes, and manual access across multiple devices faces problems such as low efficiency, information redundancy and fragmentation. This paper investigated large-scene video stitching technology, carried out feature matching of key points by using the scale-invariant features of the SIFT algorithm, and completed the image splicing by affine matrix deformation. The traditional SIFT splicing algorithm is further optimized, mainly through a distance-based optimization algorithm, to improve the effect of video splicing. In order to improve the efficiency of stitching, a key-frame extraction technique based on a weighted SIFT feature point matching optimization algorithm was accelerated in parallel. Experiments show that the proposed optimization method can extract essential information in the video more effectively and achieve better video stitching results. Visually, the seam between the two images that appears in the results of the conventional method does not appear in the stitching results obtained by the proposed method. Moreover, the key point detection and splicing parts were accelerated and optimized respectively in different environments. In MATLAB, the efficiency of the optimized key point detection is improved by 20%, and that of the splicing part is increased by about 57%. In C++, the efficiency of key point detection is improved by 14%, and that of the splicing part is increased by about 40%.
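The affine matrix used for splicing can be estimated from matched SIFT key points by least squares. A minimal NumPy sketch; the function names are illustrative, and a robust pipeline would combine this with outlier rejection:

```python
import numpy as np

def estimate_affine(src, dst):
    # Solve for the 2x3 affine matrix M mapping src -> dst in least squares:
    # [x y 1] @ M.T = [x' y'], one matched key-point pair per row.
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    return M.T                                     # (2, 3) affine matrix
```

With three or more well-distributed matches the system is determined; using many matches in least squares averages out localization noise in the SIFT key points before the frames are warped and blended.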
Interdiscipline & Frontier
Subway Passenger Flow Forecasting Model Based on Temporal and Spatial Characteristics
ZHANG He-jie,MA Wei-hua
Computer Science. 2019, 46 (7): 292-299.  doi:10.11896/j.issn.1002-137X.2019.07.045
Abstract PDF(2581KB) ( 1257 )   
References | Related Articles | Metrics
With the rapid development of urban rail transit, short-term subway passenger flow forecasting helps the operation department observe real-time changes in passenger flow and adjust scheduling strategies. This paper studied the temporal and spatial characteristics of passenger flow. Under 10-minute time slices, passenger flow changes are periodic, and there are spatial differences in the passenger flow waveforms. This paper used an agglomerative hierarchical clustering algorithm to analyze the passenger flow of different stations over a week, and obtained station groups with similar passenger flow characteristics. According to the classification results, correlation analysis was performed on time slices of different types of historical passenger flow, and a prediction model based on Support Vector Machine was proposed, taking time slice sequences with strong correlation as input. Besides, a parameter optimization model based on a double-population firefly algorithm with cooperative self-adaptive adjustment was proposed, in which chaotic attraction was introduced to improve the global search ability and avoid being trapped in a local optimum, and an adaptive search step length was added to improve the convergence speed and solution accuracy. Compared with other models and optimization algorithms, the proposed model has better prediction accuracy, stability and robustness.
Cancer Classification Prediction Model Based on Correlation and Similarity
ZHANG Xue-fu, ZENG Pan, JIN Min
Computer Science. 2019, 46 (7): 300-307.  doi:10.11896/j.issn.1002-137X.2019.07.046
Abstract PDF(2285KB) ( 683 )   
References | Related Articles | Metrics
Cancer diagnosis based on empirical histopathology often has a high rate of misdiagnosis. Analyzing and studying cancer at the gene level is one of the important ways to improve the accuracy of cancer classification prediction at this stage. Biological studies have shown that the genes related to the same kind of cancer share common functional characteristics. Based on this, this paper proposed an integrated method combining correlation and similarity for cancer classification prediction. First, on the one hand, differentially expressed genes are analyzed statistically and mutual information is used to compute the correlation between gene expression profiles and the cancer class. On the other hand, similarity between genes is analyzed on the basis of biological mechanisms: topological similarity is computed on the protein-protein interaction network and semantic similarity on GO data, and the two are combined into a functional similarity between genes. The feature set is then selected by simultaneously maximizing its relevance and similarity to the target set. Next, the data set is sampled by the Bootstrap method to ensure diversity, and on the basis of the selected feature set, multiple different machine learning algorithms are used to train a number of differentiated prediction models. Finally, the multiple models classify the test samples, and the final classification result is obtained through a decision model. On four different cancer datasets from GEO, the classification accuracy is improved by about 5% compared with the latest research methods, and by up to 10% compared with the IG/SGA method. The experimental results show that combining relevance and similarity can effectively improve the accuracy of cancer classification prediction. The selected characteristic genes are helpful for revealing biological significance, and the complementary advantages of multiple algorithms alleviate the problem that the application scope of a single classification algorithm is limited.
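The relevance half of the selection criterion described above rests on mutual information between a (discretized) gene expression profile and the class label. A minimal sketch of that computation on toy data follows; the gene names and discretization are illustrative, not from the paper.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts c, px, py over n samples.
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Toy data: gene A tracks the tumour/normal label perfectly, gene B is noise.
label  = [0, 0, 0, 0, 1, 1, 1, 1]
gene_a = [0, 0, 0, 0, 1, 1, 1, 1]   # expression discretised to low/high
gene_b = [0, 1, 0, 1, 0, 1, 0, 1]
```

A gene like `gene_a` scores 1 bit (fully informative) while `gene_b` scores 0, so ranking by this quantity keeps class-relevant genes; the paper then intersects this ranking with the network/GO similarity criterion.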
Conflict-free AGV Path Planning in Automated Terminals Based on Speed Control
ZHONG Mei-su, YANG Yong-sheng, ZHOU Ya-min
Computer Science. 2019, 46 (7): 308-314.  doi:10.11896/j.issn.1002-137X.2019.07.047
Abstract PDF(2264KB) ( 525 )   
References | Related Articles | Metrics
With the increase of labor cost, improving the efficiency of terminals has become the key to the development of ports. For Automated Guided Vehicles (AGVs), the main horizontal-transport equipment of automated terminals, the problems of conflict, congestion and waiting have become more serious, which greatly reduces the operation efficiency of the terminals. This paper built a model that minimizes the AGV driving distance between the quay cranes (QCs) and yard cranes (YCs) to choose the optimal path. By testing the overlapping rate and the conflict time, and using a speed control strategy that follows the first-come-first-served rule, conflict-free AGV path planning is achieved. The simulation results show that this method can effectively reduce the probability of AGV conflicts, decrease the waiting time of QCs and YCs, improve the operation efficiency of AGVs, and minimize the total cost of operation.
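The first-come-first-served speed-control rule above can be sketched as a tiny scheduler for one shared path node: AGVs keep their routes, and a later arrival is merely slowed so that crossings stay a safe interval apart. This is a sketch under assumed names and a single conflict point, not the paper's full terminal model.

```python
def fcfs_schedule(arrivals, headway=2.0):
    """First-come-first-served crossing times for AGVs at one shared node.
    arrivals: {agv_id: planned arrival time}. A later AGV is delayed
    (i.e. its speed is reduced upstream) so consecutive crossings stay
    at least `headway` time units apart; no re-routing is done."""
    granted, last = {}, None
    for agv, t in sorted(arrivals.items(), key=lambda kv: kv[1]):
        if last is not None and t < last + headway:
            t = last + headway          # speed-control delay
        granted[agv] = t
        last = t
    return granted

plan = fcfs_schedule({"AGV1": 0.0, "AGV2": 1.0, "AGV3": 5.0})
# AGV2 is pushed back to 2.0; AGV3 already clears the headway and is unchanged.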
Process Model Mining Method Based on Process Cut
SONG Jian,FANG Xian-wen,WANG Li-li
Computer Science. 2019, 46 (7): 315-321.  doi:10.11896/j.issn.1002-137X.2019.07.048
Abstract PDF(1793KB) ( 273 )   
References | Related Articles | Metrics
The purpose of process mining is to mine a model that meets people's needs from the event log of a business process, so as to improve and optimize the process model. In previous studies, models were mined from the frequent log, and the low-frequency log was deleted directly, which makes the mined model incomplete and may cause deadlocks or other anomalies. This paper proposed a process model mining method based on process cuts. The method mines the process model from the event log and segments the event log in the form of process cuts, taking into account not only the frequent behaviors but also the behaviors in the low-frequency mode. In particular, for the problem that an abnormal circular structure causes anomalies in the edge structure of the flow chart, the process cut can handle it well. The model obtained by the proposed method is more comprehensive and complete, and the validity and accuracy of the model are improved. The evaluation index was used to optimize the constructed model and obtain the optimal model. Finally, the effectiveness of the method was verified by concrete examples.
Traffic Flow Prediction Method Based on Spatio-Temporal Feature Mining
KONG Fan-yu, ZHOU Yu-feng, CHEN Gang
Computer Science. 2019, 46 (7): 322-326.  doi:10.11896/j.issn.1002-137X.2019.07.049
Abstract PDF(1880KB) ( 1017 )   
References | Related Articles | Metrics
Traffic forecasting methods using neural networks and big data continue to emerge, but their prediction accuracy for traffic flow is often insufficient. To solve this problem, this paper proposed a traffic flow forecasting method based on spatio-temporal feature mining. This method uses an improved convolutional neural network (CNN) to mine the spatial features of traffic flow and a recurrent neural network to mine its temporal features, so that it can make full use of the weekly/daily periodicity and the spatio-temporal characteristics of traffic flow. In addition, the method introduces a correlation-based model that learns automatically from past traffic flow. Experimental results show that, compared with several recent methods, the proposed method achieves higher prediction accuracy for traffic flow.
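The weekly/daily periodicity the method exploits can be shown with a deliberately simple blend: the forecast combines the most recent reading with the same-time readings from one day and one week earlier. The fixed weights here are purely illustrative; in the paper the weighting is effectively learned by the correlation-based model, not hand-set.

```python
def periodic_forecast(recent, daily, weekly, weights=(0.5, 0.3, 0.2)):
    """Blend the latest observation with same-time-of-day values from one
    day and one week earlier. `weights` are illustrative constants, not
    learned parameters."""
    wr, wd, ww = weights
    return wr * recent + wd * daily + ww * weekly

# Vehicles per 5-minute interval at one detector (toy numbers).
flow = periodic_forecast(recent=120.0, daily=100.0, weekly=110.0)
```

A CNN/RNN pipeline replaces these fixed weights with features learned from the spatial neighbourhood and the temporal history, but the signal being exploited is the same.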
Application of Illumination Clustering and SVM in Energy-saving Control Strategy of Street Lamps
WEN Jun-hao,WAN Yuan,ZENG Jun,WANG Xi-bin,LIANG Guan-zhong
Computer Science. 2019, 46 (7): 327-332.  doi:10.11896/j.issn.1002-137X.2019.07.050
Abstract PDF(1998KB) ( 586 )   
References | Related Articles | Metrics
The traditional street lighting industry mainly controls street lamps by strategies based on time, latitude and longitude, or illumination, among which illumination-based control theoretically achieves the best energy-saving effect. However, due to errors in light collection, installation angle and other environmental factors, the energy-saving effect has not been maximized. To address this problem, this paper proposed a street lighting energy-saving control strategy based on illumination clustering and the support vector machine (SVM) algorithm. The method collects light intensity, time and installation-angle data, uses the K-means algorithm to cluster the illumination data, and converts the original illumination readings into 5 grades (from 1 to 5). Then the data are used to train an SVM, which predicts the switching time of the street lamps without considering other external factors. The experimental results show that the algorithm can effectively reduce the power consumption of street lamps.
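The clustering step above, K-means over one-dimensional illumination readings mapped to 5 grades, can be sketched directly. This is a plain 1-D K-means with deterministic quantile seeding rather than whatever initialization the paper uses; the lux values are made up.

```python
def kmeans_1d(values, k=5, iters=20):
    """Plain 1-D k-means: cluster raw illumination readings into k grades.
    Centers are seeded at evenly spaced quantiles for determinism."""
    vs = sorted(values)
    centers = [vs[int(i * (len(vs) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[i].append(v)
        # Keep a stale center if its bucket emptied out.
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

def grade(v, centers):
    """Map a raw lux reading to a grade 1..k (1 = darkest)."""
    return 1 + min(range(len(centers)), key=lambda c: abs(v - centers[c]))

lux = [1, 2, 3, 20, 21, 50, 51, 80, 81, 120, 121]   # toy sensor readings
centers = kmeans_1d(lux)
```

The resulting grade (together with time and installation angle) is what the SVM is trained on to predict switching times.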
Vessel AIS Trajectory Online Compression Algorithm Combining Dynamic Thresholding and Global Optimization
SONG Xin,ZHU Zong-liang,GAO Yin-ping,CHANG Dao-fang
Computer Science. 2019, 46 (7): 333-338.  doi:10.11896/j.issn.1002-137X.2019.07.051
Abstract PDF(1992KB) ( 555 )   
References | Related Articles | Metrics
With the further development of vessel location technology, a large amount of vessel trajectory data is generated by the positioning and identification systems installed on vessels. Compressing these data can greatly improve the efficiency of data processing and application. However, compressing vessel trajectory data online suffers from problems such as a high compression ratio and long processing time. Therefore, this paper proposed a two-stage online compression algorithm (DTGO) that combines dynamic thresholding with global optimization. In the first stage, the original trajectory is processed in segments and the threshold values are dynamically updated, yielding a simplified trajectory. In the second stage, the simplified trajectory is globally optimized by a modified SPM algorithm. Through this two-stage processing, the original trajectory is segmented into several sub-trajectory segments that are processed locally, and the proposed global processing algorithm then optimizes all sub-trajectory segments globally. The experimental results show that the algorithm not only achieves higher compression efficiency but also better compression results.
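The first, local stage of such online compression is typically an opening-window scheme: a point is retained only when some intermediate point deviates from the current anchor-to-candidate chord by more than a threshold. The sketch below uses a fixed threshold; making `eps` track local turning rate would give the dynamic-threshold behaviour the paper describes. It is an illustration of the general technique, not the DTGO algorithm itself.

```python
import math

def point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / length

def window_compress(points, eps=0.1):
    """Opening-window online simplification: grow the window from the
    current anchor; when any interior point strays more than eps from
    the anchor-to-candidate chord, keep the previous point and restart
    the window there."""
    if len(points) <= 2:
        return list(points)
    kept, anchor = [points[0]], 0
    for i in range(2, len(points)):
        if any(point_line_dist(points[j], points[anchor], points[i]) > eps
               for j in range(anchor + 1, i)):
            kept.append(points[i - 1])
            anchor = i - 1
    kept.append(points[-1])
    return kept

# A track that runs straight, then turns: only the turn survives.
track = [(0, 0), (1, 0.01), (2, 0.0), (3, 1.0), (4, 2.0), (5, 3.0)]
simplified = window_compress(track)
```

DTGO's second stage would then revisit the kept points globally (via the modified SPM algorithm) to remove any that the greedy local pass retained unnecessarily.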