Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 46 Issue 6A, 14 June 2019
Survey on Distributed Message System
WU Can, WANG Xiao-ning, XIAO Hai-li, CAO Rong-qiang, ZHAO Yi-ning, CHI Xue-bin
Computer Science. 2019, 46 (6A): 1-5. 
With the advent of the big data era, there have been increasing demands for highly concurrent access and mass data processing in all kinds of hardware and software systems. High availability, extensibility and scalability are the main driving forces of system development, and distributed systems have emerged in response, providing solutions to these higher performance requirements. However, as distributed systems are deployed across different computers, message communication between the systems has become an important problem. This paper provided an overview of the research progress of four popular open-source distributed message systems: RabbitMQ, Kafka, ActiveMQ and RocketMQ. Their architectures and performance are compared and analyzed, providing information and references for researchers and developers choosing a distributed message system.
Survey on Blockchain Solution for Big Data
WANG Zhen, ZHOU Ying, HUANG Cheng-dong, MIAO Quan-qiang
Computer Science. 2019, 46 (6A): 6-10. 
In the information age, the data generated by all walks of life has multiplied. These data are enormous in quantity, complicated in structure and difficult to manage. Compared with traditional data management methods, blockchain, as an emerging technology, has the advantages of decentralization, trustlessness and data encryption, and it can smoothly solve the data management issues in big data applications. This paper focused on the application of blockchain technology in four areas: personal data management, digital property protection, IoT communication and medical data sharing. Moreover, some difficult problems in each big data application were discussed, and novel blockchain-based solutions to these problems were summarized. Finally, this paper pointed out some important directions for future blockchain applications in the big data field.
Survey on Applications of Visual Crowdsensing
ZHAI Shu-ying, LI Ru, LI Bo, HAO Shao-yang
Computer Science. 2019, 46 (6A): 11-15. 
In recent years, Visual Crowdsensing (VCS), which senses through images and video, has become a predominant paradigm of Mobile Crowdsensing (MCS) and one of the current research hotspots. VCS asks people to capture the details of sensing objects in the real world in the form of pictures or video, and it is widely used in various fields. However, no article has yet summarized the development and current situation of VCS in China. To this end, this paper summarized the latest applications of VCS, including floor plan generation, indoor scene reconstruction, outdoor scene reconstruction, event reconstruction, indoor localization, indoor navigation and disaster relief, and summarized some problems unique to VCS at present.
Survey of WCET Analysis and Prediction for Real-time Embedded Systems
WANG Ying-jie, ZHOU Kuan-jiu, LI Ming-chu
Computer Science. 2019, 46 (6A): 16-22. 
For the safe operation of a real-time embedded system, it is necessary to verify whether the system meets its timing constraints: each task must be completed before its deadline, otherwise the real-time system fails. At present, the worst-case execution time (WCET) is one of the important indicators for measuring a real-time embedded system. This paper firstly introduces WCET analysis itself and the main methods for this analysis. Secondly, the main problems of WCET analysis under the complex processor architectures of current multi-core platforms are investigated. Thirdly, based on these problems, the research progress in WCET analysis is discussed for timing analysis, micro-architecture analysis, and multi-core, multi-task scheduling strategies. Finally, an adaptive real-time dynamic voltage and frequency scaling (DVFS) algorithm based on deep learning is proposed, which achieves the energy-saving target and dynamically corrects the WCET value of the program, providing guidance for the analysis and prediction of WCET in future embedded systems.
Overview and Difficulties Analysis on Credibility Assessment of Simulation Models
YANG Xiao-jun, XU Zhong-fu, ZHANG Xing, SUN Dan-hui
Computer Science. 2019, 46 (6A): 23-29. 
With the development of computing and of modeling and simulation techniques, simulation models have been applied widely in military, social and economic areas in recent years. Meanwhile, the capabilities of simulation models have grown ever stronger, and simulation systems have become more and more complex. Thus, the credibility assessment of simulation models is confronted with many challenges and has become a crucial and difficult problem. This paper reviewed the principal works, especially current research, on credibility assessment of simulation models. Firstly, the concept of credibility and its relationship with Verification, Validation and Accreditation (VV&A) were introduced. Secondly, the research history, paradigm and life cycle of credibility assessment of simulation models were summarized, and important methods and techniques were analyzed and classified. Finally, eight challenges of credibility assessment of simulation models were presented. This overview and difficulties analysis provides significant references for constructing the framework and innovating the theories and methods for the credibility assessment of simulation models.
Status and Development of Gait Recognition
JIN Kun, CHEN Shao-chang
Computer Science. 2019, 46 (6A): 30-34. 
Gait recognition is a biometrics technology which aims to identify people by their walking posture. Compared with other biometrics technologies, gait recognition has the advantages of being contactless, working at long distance and being difficult to disguise. Since gait recognition was proposed in 1994, it has developed rapidly. With the acceleration of its algorithms, gait recognition has more advantages than image recognition in the field of intelligent video surveillance. This paper summarized the principle of gait recognition and introduced its application in the various stages of recognition, then summarized and sorted out the available databases, put forward methods and prospects for multi-feature fusion recognition, and looked forward to the future development of identity recognition. The emergence and application of bio-radar will provide more possibilities for gait recognition.
Review on Urban Air Quality Perception Methods
WANG Peng-yue, GUO Mao-zu, ZHAO Ling-ling, ZHANG Yu
Computer Science. 2019, 46 (6A): 35-40. 
Urban air quality information is especially important for controlling air pollution and protecting public health. According to whether the sensor position changes, urban air quality sensing methods can be divided into two categories: static perception methods and dynamic perception methods. Data acquisition in static sensing methods is based on air quality monitoring stations, satellite remote sensing and fixed-position sensors, and static sensing methods are further divided into low-cost and high-cost variants. Dynamic sensing methods can be divided into participatory and non-participatory methods according to whether the participant is the perceptual center. With the development of sensing technology and computing ability, the fusion of multi-source heterogeneous urban data, such as meteorological data and traffic data, can further improve the accuracy of perception. This paper firstly summarized current air quality sensing methods, then classified the sensing frameworks and data processing methods of the various approaches, and finally discussed the open problems and challenges.
Overview of Preventing Candid Photos Methods for Electronic Screens
WANG Xiao-yuan, ZHANG Wen-tao
Computer Science. 2019, 46 (6A): 41-44. 
Nowadays, mobile phones and other devices have become more and more powerful. This brings convenience and entertainment to people's lives, but it also lowers the cost of crimes such as stealing business secrets or even national secrets. Such convenient and covert ways of stealing secrets pose great challenges to information security. Based on existing academic research and commercial solutions, this paper introduced three kinds of countermeasures: hiding displayed information, detecting cameras, and screen watermarking. It then analyzed the characteristics, advantages and limitations of each method from the angle of information security protection. Finally, a new solution based on computer vision was proposed to avoid these limitations.
Study and Application of Industrial Big Data in Production Management and Control
ZHAO Ying, HOU Jun-jie, YU Cheng-long, XU Hao, ZHANG Wei
Computer Science. 2019, 46 (6A): 45-51. 
To promote the application of industrial big data in smart manufacturing, related research was reviewed. According to the needs of production management and control, this paper started from the connotation and architecture of industrial big data, and analyzed its key technologies at three levels: dynamic data perception and collection, unified data storage and management, and data analysis and decision support. Then, this paper introduced the application of industrial big data in quality management, fault diagnosis and forecasting, supply chain optimization and other typical scenarios. Finally, based on a comprehensive analysis of its development status, this paper anticipated the future application trends of industrial big data.
Intelligent Computing
Method of Predicting Performance of Storage System Based on Improved Artificial Neural Network
Computer Science. 2019, 46 (6A): 52-55. 
Measuring and evaluating the performance of a network storage system is a key problem for users and corporations. Exploiting the strong nonlinear mapping capability of the BP-ANN, a new improved algorithm for network I/O performance prediction was proposed, which improves the BP-ANN in two aspects. Firstly, a Markov chain is used to forecast and update the output of the output layer. Secondly, the artificial bee colony algorithm is used to optimize the weights when the selection probability of the algorithm reaches a certain value. The implementation of the evaluation model was simulated, and the results were compared with the standard BP-ANN. The experimental results show that the presented approach can significantly improve the solution accuracy and convergence speed of evaluating network storage system performance, almost without increasing the running time.
Automatic Extraction of Diverse Keyphrases by Utilizing Integer Linear Programming
LI Shan-shan, CHEN Li, TANG Yu-ting, WANG Yi-lin, YU Zhong-hua
Computer Science. 2019, 46 (6A): 56-59. 
Keyphrases are a concise summary of text information, representing the main topics and core ideas of a text, and their automatic extraction is one of the important tasks in natural language processing and information retrieval. Aiming at the problem of semantic over-generation among candidate phrases in unsupervised methods, this paper proposed an algorithm for automatic keyphrase extraction using integer linear programming (ILP) and the similarity between candidate phrases, in which candidate phrases with high semantic similarity are penalized while maximizing the objective function, so as to obtain diversified keyphrases. The TextRank and TFIDF algorithms are applied to create candidate phrases on two different corpora, and the proposed optimization algorithm is used to optimize the weight scores of the candidate phrases. Finally, the results of the proposed optimization algorithm are compared with those of the baseline methods. The experimental results show that the proposed method can effectively solve the semantic over-generation problem by penalizing candidate phrases with high semantic similarity. Moreover, the optimization algorithm obtains more diverse keyphrases, and its P, R and F values outperform those of the baseline methods.
Prediction Model of P2P Trading Volume Based on Investor Sentiment
ZHANG Shuai, FU Xiang-ling, HOU Yi
Computer Science. 2019, 46 (6A): 60-65. 
There are many studies on the trading volume of the peer-to-peer (P2P) lending market. However, the common methods only take investor and market information as features, and do not consider the relationship between changes in investor sentiment and the market. Research shows that investors' sentiments have a profound impact on their investment decisions and behaviors. Therefore, drawing on financial theory, this paper proposed a method to predict the trading volume of the P2P market based on investors' sentimental tendency. Firstly, the comments on WangDaiZhiJia are taken as the research object, and the TextCNN model is applied for sentiment classification; a time series of sentiment tendency is obtained, so as to measure the trend of investor sentiment. Secondly, the relationship between the investor sentiment time series and the trading volume index is verified through the Granger causality test and the Pearson correlation coefficient. Finally, a predictive model based on a long short-term memory network is employed to predict the trading volume of the P2P market. The experimental results show that adding sentiment features to the trading volume prediction model significantly improves its predictive ability.
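The Granger step of this pipeline tests whether past sentiment values improve the prediction of trading volume beyond volume's own history. A minimal numpy sketch of that F-test follows (an illustrative reconstruction, not the authors' code; the simulated series are hypothetical stand-ins for sentiment x and volume y):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic for the null 'lags of x do not help predict y' (x -> y)."""
    n = len(y)
    rows = n - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((rows, 1))
    Xr = np.hstack([ones, lags_y])           # restricted: y's own lags only
    Xu = np.hstack([ones, lags_y, lags_x])   # unrestricted: plus x's lags
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = rows - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)

rng = np.random.default_rng(1)
x = rng.normal(size=300)            # hypothetical sentiment series
y = np.empty(300)                   # hypothetical volume series driven by x
y[0] = 0.0
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(x, y) > granger_f(y, x))  # x Granger-causes y, not vice versa
```

A large F in one direction and a small F in the other is the pattern the paper's causality check looks for before feeding sentiment into the LSTM predictor.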
Study of Urban Environmental Risk Prediction Algorithm Based on SOM-PNN
LIU Na, LEI Ming
Computer Science. 2019, 46 (6A): 66-70. 
Modeled on the structure and function of biological neural networks, artificial neural networks (ANN) can perform distributed storage and parallel processing of data. The self-organizing feature map (SOM) and the probabilistic neural network (PNN) are commonly used ANN models. Based on the respective characteristics of the two models, they were connected in series: the SOM uses a two-dimensional topology consisting of two layers of neurons to map the input data, and the PNN converts the output of the SOM and directly outputs the final classification result of the model. An algorithm based on this combined model can improve running speed and remove the interference of noisy samples, which greatly improves the accuracy of the model. At present, the Beijing-Tianjin-Hebei regional environment has fallen into a higher-risk state. Taking regional SO2 concentration prediction as an example, the SOM-PNN model is used to obtain a visual output of the influence of urban factors on SO2 concentration and a high-precision prediction of the regional environment, which verifies the feasibility and effectiveness of the proposed model.
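The PNN half of the pipeline is essentially a Parzen-window classifier: each class is scored by the average Gaussian kernel response of its training samples. A minimal numpy sketch (generic, with a made-up two-blob dataset, not the paper's SO2 data):

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network: score each class by the mean Gaussian
    kernel response of its training samples, then take the argmax."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # squared distances from every test point to every sample of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.column_stack(scores), axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([[0.1, 0.1], [2.9, 3.1]])))  # -> [0 1]
```

In the paper's setup the inputs to this stage would be the SOM's two-dimensional map activations rather than raw features.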
Habitability Prediction of Exoplanets Based on GBRT Algorithm
ZHU Wei-jun, WANG Xin, ZHONG Ying-hui, FAN Yong-wen, CHEN Yong-hua
Computer Science. 2019, 46 (6A): 71-73. 
The habitability of exoplanets has been a hot research topic in the exploration of the universe in recent years. Machine learning (ML) techniques provide a viable means of classifying exoplanets according to their habitability. However, the existing ML-based approaches to habitability classification have some serious shortcomings and limitations. To this end, this paper provided a novel method for predicting the habitability of exoplanets based on Gradient Boosted Regression Trees (GBRT). First, physical and astronomical data on the potentially habitable exoplanets and the non-habitable ones are employed to train a GBRT model. Then, the trained model is used to predict the habitability of the exoplanets in the test set. The simulated experimental results show that the predictive accuracy of the new method is as high as 100%.
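The train-then-predict workflow described here can be sketched with scikit-learn's gradient boosting implementation. The features and class separation below are invented stand-ins (e.g. radius, stellar flux, equilibrium temperature), not the paper's actual catalog:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical planetary features; two well-separated populations.
X_hab = rng.normal([1.0, 1.0, 250.0], [0.2, 0.2, 20.0], size=(80, 3))
X_non = rng.normal([5.0, 9.0, 700.0], [0.5, 0.5, 50.0], size=(80, 3))
X = np.vstack([X_hab, X_non])
y = np.array([1] * 80 + [0] * 80)  # 1 = potentially habitable

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X, y)
print(clf.score(X, y))  # separable toy data, so training accuracy is 1.0
```

On a real catalog one would of course report held-out accuracy rather than training accuracy, as the paper does with its test set.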
Movie Review Professionalism Classification Using LSTM and Features Fusion
WU Fan, LI Shou-shan, ZHOU Guo-dong
Computer Science. 2019, 46 (6A): 74-79. 
Movie reviews on social networks include professional reviews written by professional critics as well as non-professional reviews written by ordinary audiences, and distinguishing whether an online film review is professional is of great value for film quality evaluation. Because a film review is a short text with irregular wording and sparse features, traditional text feature selection methods and traditional classification models cannot be fully applied to classifying a review's professionalism. Therefore, this paper studied movie review professionalism classification based on a neural network model, that is, judging whether a review is professional or non-professional. Representations of different features are learned through an LSTM model, including word-based representation, part-of-speech representation, and representation based on dependencies, and valid text features are learned and captured by fusing the different feature representations to help professionalism classification. The method was evaluated on the Rotten Tomatoes dataset from the famous American film review website. The experimental results show that the classification accuracy of the model combining part-of-speech and dependency features is 88.30%, which is 3.66% higher than the baseline model using only word features. This shows that introducing part-of-speech and dependency features into the model can effectively improve the performance of review professionalism classification.
Multi-layer Screening Based Evolution Algorithm for De Novo Protein Structure Prediction
LI Zhang-wei, HAO Xiao-hu, ZHANG Gui-jun
Computer Science. 2019, 46 (6A): 80-84. 
Aiming at the diversity of sampling in high-dimensional protein conformational space, a multi-layer screening based evolution algorithm for de novo protein structure prediction (MlISEA) was proposed. Within the evolution algorithm framework, the knowledge-based Rosetta coarse-grained energy model is employed as the objective function to reduce the dimensionality of the protein conformational space. Taking 9-mer and 3-mer fragment assembly as two different mutation strategies, the diversity of individuals within the same generation is increased. In addition, a multi-layer individual screening method is designed to further improve the diversity of individuals across generations. Then, a Monte Carlo algorithm is adopted to drive each individual toward a local optimal solution. Finally, the global solution and different local solutions can be obtained. Test results on 10 target proteins show that the proposed method can effectively improve sampling diversity, and predicted conformations with TM-score greater than 0.5 can be obtained for further refinement.
Chaotic Fireworks Algorithm for Solving Travelling Salesman Problem
CAI Yan-guang, CHEN Hou-ren, QI Yuan-hang
Computer Science. 2019, 46 (6A): 85-88. 
The travelling salesman problem (TSP) is a classic combinatorial optimization problem; it is a typical NP-hard problem with important research value. This paper presented a chaotic fireworks algorithm (CFWA) to solve the TSP. The algorithm uses the largest-order-value (LOV) rule to define the discrete domain and adds a chaos optimization strategy to enhance the search capability of the algorithm. Four parametric experiments were designed to analyze the influence of the main parameters on CFWA and to determine the optimal parameter settings. Comparison experiments show that the chaotic fireworks algorithm has better convergence and stability than the comparison algorithms.
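The LOV rule is the bridge from the fireworks algorithm's continuous positions to discrete tours: components are ranked in descending order and the ranking is read as a city permutation. A small sketch of the decoding and tour evaluation (illustrative, with a made-up 3-city distance matrix):

```python
import numpy as np

def lov_decode(x):
    """Largest-order-value rule: rank the components of a continuous firework
    position in descending order to obtain a city visiting permutation."""
    return np.argsort(-np.asarray(x))

def tour_length(perm, dist):
    """Cycle length of the decoded tour under a distance matrix."""
    return sum(dist[perm[i], perm[(i + 1) % len(perm)]] for i in range(len(perm)))

print(lov_decode([0.2, 0.9, 0.5]))  # -> [1 2 0]
dist = np.array([[0, 1, 4], [1, 0, 2], [4, 2, 0]])
print(tour_length(lov_decode([0.2, 0.9, 0.5]), dist))  # 1->2->0->1 = 2+4+1 = 7
```

Because any real vector decodes to a valid permutation, the explosion and chaos operators can keep working in continuous space while fitness is evaluated on tours.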
Bat Optimization Algorithm Based on Dynamically Adaptive Weight and Cauchy Mutation
ZHAO Qing-jie, LI Jie, YU Jun-yang, JI Hong-yuan
Computer Science. 2019, 46 (6A): 89-92. 
In order to speed up the convergence of the bat algorithm and improve its optimization accuracy, this paper proposed a bat optimization algorithm based on dynamically adaptive weight and Cauchy mutation. The algorithm adds a dynamically adaptive weight to the velocity formula and dynamically adjusts its size to speed up convergence. In addition, a mutation based on the Cauchy inverse cumulative distribution function effectively improves the global search ability of the bat algorithm and avoids falling into local optima. Simulation results on 12 typical test functions show that the improved algorithm has better performance, faster convergence speed and higher optimization accuracy.
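The Cauchy inverse-CDF trick mentioned here samples heavy-tailed noise as x + γ·tan(π(u − 1/2)) with u uniform on (0, 1); the heavy tails produce occasional long jumps that help escape local optima. A minimal sketch (the scale γ and its schedule are assumptions, not values from the paper):

```python
import numpy as np

def cauchy_mutate(x, gamma=1.0, rng=None):
    """Cauchy mutation via the inverse CDF: x + gamma * tan(pi * (u - 1/2)),
    u ~ U(0,1). Heavy tails give occasional long jumps out of local optima."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(len(x))
    return x + gamma * np.tan(np.pi * (u - 0.5))

rng = np.random.default_rng(42)
print(cauchy_mutate(np.zeros(5), gamma=0.1, rng=rng))
```

In the bat algorithm this perturbation would typically be applied to selected individuals after the standard frequency/velocity position update.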
Emotion Classification Algorithm Based on Emotion-specific Word Embedding
ZHANG Lu, SHEN Chen-lin, LI Shou-shan
Computer Science. 2019, 46 (6A): 93-97. 
Emotion analysis is a hot research issue in the field of NLP; it infers the feelings of individuals by analyzing the text they have published. Emotion classification, a fundamental task in emotion analysis, aims to determine the emotion categories of a piece of text. The representation of words is a critical prerequisite for emotion classification. Many intuitive choices for learning word embeddings are available, but these algorithms typically model only the syntactic context of words and ignore the emotion information attached to them. As a result, words with opposite emotions but similar syntactic contexts tend to be represented by close vectors. To address this problem, this paper proposed a heterogeneous network composed of two basic networks, i.e., a document-word network and an emoticon-word network, to learn emotion-specific word embeddings. Finally, an LSTM network was trained on the labeled data. Empirical studies demonstrate the effectiveness of the proposed approach for learning emotion-specific word embeddings.
Improved Genetic Algorithm for Subgraph Isomorphism Problem
XIANG Ying-zhuo, WEI Qiang, YOU Ling, SHI Hao
Computer Science. 2019, 46 (6A): 98-101. 
Subgraph isomorphism plays an important role in computer vision, artificial intelligence and biochemical engineering. This paper focused on the subgraph isomorphism (SI) problem and proposed a novel method based on the genetic algorithm to solve it. The offspring-producing method is improved in the crossover and evolution process. Moreover, a new fitness function was presented to measure the fitness of the population. The new algorithm converges faster and can find the optimal solutions with higher probability. Experiments on large graphs show that the proposed improved algorithm outperforms other traditional methods.
Intuitionistic Fuzzy Group Decision Making Information Aggregation Method Based on D-S Evidence Theory
ZANG Han-lin, LI Yan-ling
Computer Science. 2019, 46 (6A): 102-105. 
When dealing with intuitionistic fuzzy multi-attribute group decision-making problems, information aggregation can be performed according to D-S evidence theory. The attribute weights are determined using intuitionistic fuzzy entropy and fuzzy preference relations, and a weighted-evidence fusion method is used to obtain each expert's fused evidence for the alternative set. In the aggregation of expert information, the Euclidean evidence distance is used to measure the degree of conflict between pieces of evidence, from which the expert weights are obtained, and the evidence of the group of experts on the alternative sets is corrected and integrated. Finally, a worked example shows that the proposed method has high practical value.
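The core fusion step in D-S evidence theory is Dempster's rule: pairwise products of masses on intersecting hypothesis sets are accumulated, and the mass lost to empty intersections (the conflict K) is normalized out. A minimal sketch with a made-up two-expert example (generic Dempster combination, not the paper's weighted variant):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    (dicts mapping frozenset hypotheses to mass); conflict K is normalized out."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

# Hypothetical evidence from two experts over hypotheses A and B.
m1 = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
m2 = {frozenset({'A'}): 0.5, frozenset({'B'}): 0.5}
fused = dempster_combine(m1, m2)
print(round(fused[frozenset({'A'})], 4))  # 0.5 / 0.7 = 0.7143
```

The paper's weighted-evidence variant would discount each expert's masses by the distance-derived expert weight before applying this combination.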
Differential Evolution Algorithm with Stage-based Strategy Adaption
NI Hong-jie, PENG Chun-xiang, ZHOU Xiao-gen, YU Li
Computer Science. 2019, 46 (6A): 106-110. 
To address the problem of mutation strategy selection in the differential evolution algorithm, this paper proposed a differential evolution algorithm with stage-based strategy adaptation. Firstly, the crowding degree of the population is measured by the average distance between each individual and the best individual, and the evolution stage is estimated. Then, the population is divided into multiple sub-populations, and a mutation strategy pool based on the co-evolution of the sub-populations is designed for the different stages. Finally, according to the historical success information of each strategy, a suitable strategy is adaptively selected from the corresponding strategy pool to balance exploration and exploitation. Experimental results on 12 classical test functions show that the proposed algorithm is superior to mainstream algorithms in terms of computational cost, success rate, solution quality and scalability.
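The final step, picking a strategy in proportion to its historical success, is a roulette-wheel selection. A small sketch (the strategy names and the smoothing constant are illustrative assumptions, not the paper's exact scheme):

```python
import random

def select_strategy(pool, successes, eps=1.0):
    """Roulette-wheel choice of a mutation strategy, weighted by historical
    success counts; eps keeps every strategy selectable."""
    weights = [successes.get(s, 0) + eps for s in pool]
    return random.choices(pool, weights=weights, k=1)[0]

random.seed(0)
pool = ['DE/rand/1', 'DE/best/1', 'DE/current-to-best/1']
successes = {'DE/best/1': 40, 'DE/rand/1': 5}   # hypothetical history
picks = [select_strategy(pool, successes) for _ in range(1000)]
print(picks.count('DE/best/1') > picks.count('DE/rand/1'))  # True
```

In the paper's design the success counts would be tracked per evolution stage, so the same mechanism can favor explorative strategies early and exploitative ones late.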
Study on Named Entity Recognition Model Based on Attention Mechanism: Taking Military Text as an Example
SHAN Yi-dong, WANG Heng-jun, HUANG He, YAN Qian
Computer Science. 2019, 46 (6A): 111-114. 
To remedy the insufficiency of feature extraction by the bi-directional long short-term memory network model, character vectors and word vectors are used as the input, and an attention mechanism is used to extract the features that are useful for the current output. In this paper, a new named entity recognition model was constructed by constraining the final output tag sequence with the Viterbi algorithm. The experimental results show that the model achieves a better recognition rate on military texts.
Distribution Attribute Reduction Based on Improved Discernibility Information Tree in Inconsistent System
LONG Bing-han, XU Wei-hua, ZHANG Xiao-yan
Computer Science. 2019, 46 (6A): 115-119. 
Against the background of inconsistent systems, this paper studied how to solve the problem of distribution attribute reduction effectively. Using the judgment theorem for distribution consistent sets, a new method of distribution attribute reduction for inconsistent systems was proposed. Inspired by the discernibility matrix and the discernibility information tree, an algorithm is constructed which uses an improved discernibility information tree to compute the distribution attribute reduct. The information tree compresses the storage of non-empty elements and redundant information in the discernibility matrix, and greatly reduces the time and space complexity.
Attribute Reduction Method Based on Sequential Three-way Decisions in Dynamic Information Systems
LI Yan, ZHANG Li, CHEN Jun-fen
Computer Science. 2019, 46 (6A): 120-123. 
Multi-criteria classification refers to a type of classification problem with ordered-valued conditional attributes, where the dominance-equivalence relation is used to describe the information systems involved. However, many real-world information systems are dynamic, and attribute reducts, as the most important knowledge in decision making, often need to be updated. In order to deal with dynamic information systems with preference relations and provide an efficient method for updating attribute reducts in multi-criteria decision-making problems, this paper established an efficient knowledge-updating method based on sequential three-way decisions under dominance-equivalence relations. Multiple granules are combined to form a dynamic granular sequence, and the attribute reduct is updated by reusing current information when the object set or attribute set changes, thus saving the cost of the attribute reduction process. Several UCI datasets were selected for experiments. The results show that the proposed method reduces the time consumption noticeably while guaranteeing the quality of the attribute reduct.
Efficient Dynamic Self-adaptive Differential Evolution Algorithm
XIAO Peng, ZOU De-xuan, ZHANG Qiang
Computer Science. 2019, 46 (6A): 124-132. 
To address the premature convergence and low convergence accuracy of differential evolution, this paper proposed an efficient dynamic self-adaptive differential evolution (EDSDE) algorithm. The algorithm starts from the mutation factor, the mutation strategy and the crossover factor: it sets the mutation factor to a linearly decreasing function, incorporates an amplitude coefficient into the base vector to balance global and local search, and sets the crossover factor to a dynamic self-adaptive function that oscillates within [0,1] and is updated every 50 generations. The simulation results show that EDSDE obtains better optimization results and exhibits more desirable performance than the other algorithms.
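The two parameter schedules described here can be sketched directly; the endpoint values and the sinusoidal form of the oscillation are illustrative assumptions, since the abstract gives only the qualitative shapes (linearly decreasing F; crossover factor oscillating in [0,1], refreshed every 50 generations):

```python
import math

def mutation_factor(g, g_max, f_max=0.9, f_min=0.4):
    """Linearly decreasing mutation factor; endpoints are assumed values."""
    return f_max - (f_max - f_min) * g / g_max

def crossover_factor(g, period=50):
    """Crossover factor oscillating in [0, 1], refreshed every `period`
    generations; the sinusoidal form is an assumed concrete choice."""
    return 0.5 + 0.5 * math.sin(g // period)

print(mutation_factor(0, 200), round(mutation_factor(200, 200), 2))  # 0.9 0.4
print([round(crossover_factor(g), 3) for g in range(0, 200, 50)])
```

A large early F favors global exploration, the small late F refines local search, and the piecewise-constant oscillating crossover factor periodically changes how aggressively trial vectors mix in mutant components.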
Alternate Random Search Algorithm of Objective Penalty Function for Compressed Sensing Problem
JIANG Min, MENG Zhi-qing, SHEN Rui
Computer Science. 2019, 46 (6A): 133-137. 
The compressed sensing optimization problem was formulated as a biconvex optimization problem, and it is proved that the optimal solution of the equivalent biconvex optimization problem is also the optimal solution of the compressed sensing optimization problem. Then a smooth objective penalty function and its corresponding alternating sub-problems were defined, an iterative algorithm for solving the sub-problems was given, and the convergence theorem of the alternating algorithm was proved theoretically. The expression of the optimal solution for compressed sensing was derived, and an alternating random search algorithm was designed, which is effective for a specific type of compressed sensing problem. This method provides a new design idea for studying and solving practical compressed sensing problems.
RBF Artificial Intelligence Control Strategy for Gas Pressure Regulating Application
HE Jin, ZHONG Yuan-chang, SUN Li-li, ZHANG Xiao-fan
Computer Science. 2019, 46 (6A): 138-141. 
In order to overcome the poor accuracy and reliability of existing medium- and low-pressure gas regulator stations, an RBF neural network control strategy for gas regulator applications was proposed. The intelligent gas regulator uses a reduced-order approximation of the high-order system to obtain a simplified mathematical model of the electric gas regulator system. Then, according to the nonlinear and uncertain characteristics of the regulator system, it makes full use of the RBF neural network's good approximation of nonlinear functions to realize self-tuning of the PID parameters. The performance and functions of the regulator were tested on an MSP430 MCU development board. The test results show that, compared with the traditional PID control algorithm, the improved algorithm reduces the settling time by about 10% and the overshoot by about 6%, with superior anti-interference performance. The regulator realizes data acquisition, pressure regulation, serial communication and safety alarm functions.
Text Keyword Extraction Method Based on Weighted TextRank
Computer Science. 2019, 46 (6A): 142-145. 
Abstract PDF(1810KB) ( 1076 )   
To improve the accuracy of keyword extraction, a text keyword extraction method was proposed. The method combines influence factors such as word frequency, word length and word position into a weight formula for candidate keywords, obtains a relatively optimal weight coefficient for the formula by experiment, and applies the formula to the candidate-scoring step of the TextRank algorithm, thus improving the accuracy of extracted text keywords. The accuracy, recall and F-value of the OPW-TextRank algorithm and the TextRank algorithm in single-text keyword extraction were compared experimentally. The results show that the accuracy of the OPW-TextRank algorithm is higher than that of TextRank when the window size is 6. The method is useful in natural language processing systems based on text keyword extraction.
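The weighting idea described above can be sketched in a few lines. This is a minimal illustration only: the exact weight formula and coefficients in the paper are obtained experimentally, so the linear combination of frequency, length and position used in `word_weight` below is an assumption.

```python
# Minimal sketch of a weighted TextRank keyword scorer (illustrative only).
# The coefficients combining word frequency, length and position are
# assumptions, not the paper's experimentally tuned values.

def word_weight(word, position, freq, n_words):
    # Hypothetical linear combination of frequency, length and position cues.
    return 0.5 * freq + 0.3 * len(word) / 10.0 + 0.2 * (1.0 - position / n_words)

def weighted_textrank(words, window=6, d=0.85, iters=50):
    n = len(words)
    freq = {w: words.count(w) for w in set(words)}
    prior = {}
    for i, w in enumerate(words):
        prior.setdefault(w, word_weight(w, i, freq[w], n))
    # Co-occurrence edges inside a sliding window.
    nbrs = {w: set() for w in set(words)}
    for i, w in enumerate(words):
        for j in range(max(0, i - window + 1), min(n, i + window)):
            if words[j] != w:
                nbrs[w].add(words[j])
    score = {w: 1.0 for w in nbrs}
    for _ in range(iters):
        new = {}
        for w in nbrs:
            s = sum(score[v] / len(nbrs[v]) for v in nbrs[w] if nbrs[v])
            new[w] = (1 - d) * prior[w] + d * s
        score = new
    return sorted(score, key=score.get, reverse=True)

ranked = weighted_textrank(
    "graph based keyword extraction ranks keyword candidates in a word graph".split())
```

The node prior replaces the uniform restart term of plain TextRank, so frequent, long, early words are favoured even before the graph iteration begins.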
Pattern Recognition & Image Processing
AlexNet Model and Adaptive Contrast Enhancement Based Ultrasound Imaging Classification
CHEN Si-wen, LIU Yu-jiang, LIU Dong, SU Chen, ZHAO Di, QIAN Lin-xue, ZHANG Pei-heng
Computer Science. 2019, 46 (6A): 146-152. 
Abstract PDF(4332KB) ( 583 )   
Breast cancer is one of the most common malignant tumors in women. Its incidence is increasing year by year, which seriously threatens patients' health. In recent years, increasing attention has been paid to replacing traditional needle biopsy in the diagnosis of benign and malignant breast nodules. Medical research shows that significant differences exist at the edges of benign and malignant nodules, so boundary enhancement algorithms provide a new way to study the judgment of benign and malignant breast nodules. A database was constructed with the support of Beijing Friendship Hospital, which is affiliated to Capital Medical University, and the images were expanded based on a comparison of five contrast enhancement algorithms. The AlexNet network model, which performs excellently in image classification, was used. The data processed by linear contrast stretching, nonlinear contrast stretching, histogram equalization, histogram thresholding and adaptive contrast enhancement (ACE) were applied to the AlexNet model. The influence of the five algorithms on the accuracy of the AlexNet model was compared, and a preprocessing algorithm more suitable for ultrasonic images of breast nodules was obtained. The expanded data set contains more than ten thousand images, of which the training set accounts for 80%, and the validation set and the test set account for 10% each. Finally, the sensitivity, specificity and accuracy were calculated by plotting the ROC curve, and the test results were evaluated, with good results obtained.
Realization of “Uncontrolled” Object Recognition Algorithm Based on Mobile Terminal
PANG Yu, LIU Ping, LEI Yin-jie
Computer Science. 2019, 46 (6A): 153-157. 
Abstract PDF(3976KB) ( 338 )   
Existing object recognition methods are easily affected by “uncontrolled” factors such as illumination, angle, size and complex environments, and suffer from low recognition rate, poor real-time performance and large memory consumption. To address these problems, this paper proposed a new object recognition algorithm, on which an object recognition system based on a mobile terminal was realized. The method first employs a particle filter algorithm to track the detection range with added windows, then applies the watershed segmentation algorithm to segment objects, and uses the HOG (Histogram of Oriented Gradient) algorithm to extract object features. Finally, the random forest algorithm is utilized to recognize objects. The experimental results show that this method can identify objects on a mobile terminal quickly and accurately in an “uncontrolled” environment.
K-means Image Segmentation Algorithm Based on Weighted Quality Evaluation Function
LIU Chang-qi, SHAO Kun, HUO Xing, FAN Dong-yang, TAN Jie-qing
Computer Science. 2019, 46 (6A): 158-160. 
Abstract PDF(1916KB) ( 338 )   
The K-means clustering algorithm is commonly used in image segmentation. As an unsupervised learning method, it can find association rules from grey-level characteristics and thus has a strong segmentation capability. However, due to its single classification basis and the uncertainty of the initial cluster centers, the algorithm still has some defects in image segmentation. Aiming at this problem, this paper proposed a modified K-means algorithm for image segmentation. The new algorithm uses an improved iterative algorithm based on information entropy to select thresholds as the initial K-means cluster centers, and then puts forward a new weighted quality evaluation function for the K-means algorithm to obtain better segmentation thresholds. The experimental results show that the improved algorithm has higher accuracy and stability than the OTSU algorithm and the traditional K-means algorithm in image segmentation.
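The core of such a scheme can be sketched as 1-D K-means over grey levels with deterministic, threshold-style seeds. This is a toy illustration: the paper's entropy-based iterative threshold selection is replaced here by simple quantile seeds, which is an assumption made only to show why non-random initial centres stabilise the clustering.

```python
# Toy sketch of K-means grey-level segmentation with threshold-style
# initial centres (quantile seeds stand in for the paper's entropy-based
# iterative threshold selection).

def kmeans_gray(pixels, k=2, iters=20):
    srt = sorted(pixels)
    # Seed centres at evenly spaced quantiles instead of random picks,
    # mimicking threshold-derived, deterministic initial cluster centres.
    centres = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            idx = min(range(k), key=lambda c: abs(p - centres[c]))
            clusters[idx].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: abs(p - centres[c])) for p in pixels]
    return centres, labels

# Grey values from a dark region and a bright region.
pixels = [10, 12, 11, 200, 205, 198, 13, 202]
centres, labels = kmeans_gray(pixels, k=2)
```

Because the seeds are fixed by the data's ordering rather than chosen at random, repeated runs give identical segmentations, which is the stability property the abstract highlights.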
Single Image Depth Estimation Algorithm Based on SFS and Binocular Model
ZHAO Zi-yang, JIANG Mu-rong, HUANG Ya-qun, HAO Jian-yu, ZENG Ke
Computer Science. 2019, 46 (6A): 161-164. 
Abstract PDF(2572KB) ( 472 )   
Obtaining depth information from 2D images is a hot topic in the field of computer vision. Classical binocular vision methods require camera parameters and multiple images of the same scene; insufficient visual parameters can easily lead to calculation errors, while a single image can only rely on its own geometric information to recover depth. This paper used the geometric information of the image together with the binocular vision model to obtain the depth of objects in a single ordinary two-dimensional image with unknown camera parameters. The experimental results show that the image depth obtained by the proposed method reflects the real information of the scene relatively accurately, consistent with actual observations.
Static Gesture Recognition Based on Hybrid Convolution Neural Network
SHI Yu-xin, DENG Hong-min, GUO Wei-lin
Computer Science. 2019, 46 (6A): 165-168. 
Abstract PDF(1885KB) ( 541 )   
Static gesture recognition has attracted special attention for its great application value in human-machine interaction. At the same time, the accuracy of gesture recognition is affected to a certain extent by the complexity of the gesture background and the diversity of gesture morphology. In order to improve the accuracy of gesture recognition, a method based on convolutional neural network (CNN) and random forest (RF) was proposed. Firstly, the static gesture image is segmented, then the feature extraction capability of the convolutional network is used to extract feature vectors, and finally a random forest classifier is used to classify these feature vectors. On the one hand, the CNN has the ability of layered learning and can collect more representative information from the picture. On the other hand, the random forest introduces randomness in sample and feature selection, and averaging the results of the individual decision trees helps avoid over-fitting. The method was verified on a static gesture data set, and the experimental results show that it can effectively identify static gestures, achieving an average recognition rate of 94.56%. The proposed method was further compared with principal component analysis (PCA) and local binary pattern (LBP). The experimental results show that classification with features extracted by the CNN is better than with PCA and LBP: the recognition rate is 2.44% higher than that of the PCA-RF method and 1.74% higher than that of the LBP-RF method. Finally, the recognition rate of the proposed method reaches 97.9%, higher than that of the two traditional feature extraction methods.
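The feature-extractor-plus-forest pipeline can be illustrated end to end with heavily simplified stand-ins: a hand-crafted feature vector replaces the CNN features, and a forest of bootstrapped decision stumps replaces the full random forest. Both substitutions are assumptions made purely for a runnable sketch; neither is the paper's actual model.

```python
# Illustrative stand-in for the CNN-plus-random-forest pipeline: a forest
# of decision stumps with bootstrap sampling and random feature choice
# (the two sources of randomness the abstract mentions).

import random

def stump_forest_fit(X, y, n_trees=15, seed=0):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        f = rng.randrange(len(X[0]))               # random feature choice
        idx = [rng.randrange(len(X)) for _ in X]   # bootstrap sample
        thr = sum(X[i][f] for i in idx) / len(idx)
        # Majority label on each side of the threshold.
        left = [y[i] for i in idx if X[i][f] <= thr]
        right = [y[i] for i in idx if X[i][f] > thr]
        l_lab = max(set(left), key=left.count) if left else 0
        r_lab = max(set(right), key=right.count) if right else 1
        forest.append((f, thr, l_lab, r_lab))
    return forest

def stump_forest_predict(forest, x):
    # Averaging over trees = majority vote for classification.
    votes = [l if x[f] <= thr else r for f, thr, l, r in forest]
    return max(set(votes), key=votes.count)

# Two well-separated "gesture feature" classes (invented toy vectors).
X = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]
y = [0, 0, 0, 1, 1, 1]
model = stump_forest_fit(X, y)
pred = stump_forest_predict(model, [0.12, 0.88])
```

The vote over many randomized weak trees is exactly the mechanism by which a random forest averages away the over-fitting of any single tree.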
Texture Detail Preserving Image Interpolation Algorithm
SONG Gang, DU Hong-wei, WANG Ping, LIU Xin-xin, HAN Hui-jian
Computer Science. 2019, 46 (6A): 169-176. 
Abstract PDF(5549KB) ( 712 )   
It is difficult to maintain texture details in image interpolation. To overcome this problem, this paper proposed a new image interpolation method based on a rational interpolation function. Firstly, the image is automatically divided into texture regions and smooth regions using the isoline method. Secondly, a new C2-continuous rational interpolation function is constructed, which is an organic unity of polynomial models and rational models. According to the regional features of the image, the texture region is interpolated by the rational model and the smooth region by the polynomial model. Finally, based on the human visual system, a multi-scale approach is proposed to boost details of the interpolated image. Experimental results show that this algorithm not only has lower time complexity, but also preserves image detail and obtains high objective evaluation scores.
Target Detection in Colorful Imaging Sonar Based on Multi-feature Fusion
WANG Xiao, ZOU Ze-wei, LI Bo-bo, WANG Jing
Computer Science. 2019, 46 (6A): 177-181. 
Abstract PDF(2566KB) ( 517 )   
With the in-depth development of underwater work in rivers, lakes and offshore shallow water areas, divers' underwater engineering work such as salvage, positioning and exploration has become significant. The TKIS-I helmet-mounted colorful imaging sonar developed by this laboratory has been acknowledged by the Navigation and Warranty Department of the Chinese Navy, and more than two dozen TKIS-I units are currently in service. However, in complex underwater environments divers perform operations at great risk, so it is expected that underwater robots will achieve automatic underwater target detection in the future. Aiming at the features of sonar images, this paper adopted feature extraction methods based on HSV color space, Histogram of Oriented Gradient (HOG) and Local Binary Pattern (LBP) for color, shape and texture respectively. Besides, the paper improved the multi-feature fusion method and used an optimized support vector machine (SVM) for classification, aiming to detect underwater targets quickly and lay the foundation for robots' automatic underwater target detection in the future.
Application of Deep Learning in Driver’s Safety Belt Detection
HUO Xing, FEI Zhi-wei, ZHAO Feng, SHAO Kun
Computer Science. 2019, 46 (6A): 182-187. 
Abstract PDF(3757KB) ( 819 )   
Seat belts are one of the most effective measures to protect drivers' safety, and the law stipulates that drivers must wear seat belts when driving. At present, seat belt identification is mainly based on manual screening. However, traditional detection methods cannot meet the needs of traffic management given the rapid increase in the number of vehicles, and automatic seat belt detection has become one of the urgent problems in current traffic systems. In this paper, a recognition system for drivers' seat belts is designed. First, the vehicle window is roughly positioned by the geometric relationship between the license plate and the window. Second, the Hough transform is used to detect the upper and lower edges of the window, and the integral projection transformation is used to detect its left and right borders; the detected picture is then cut in half to get the driver's rough position. Finally, seat belt identification is conducted with a deep convolutional neural network that adds a spatial transform layer. Experiments were carried out on 10000 pictures from different checkpoints and different time periods. The experimental results show that the proposed method can effectively identify whether the driver wears the seat belt according to the regulations, and the comprehensive recognition rate is significantly improved compared with existing methods.
Image Super-resolution Reconstruction Algorithm with Adaptive Sparse Representation and Non-local Self-similarity
ZHANG Fu-wang, YUAN Hui-juan
Computer Science. 2019, 46 (6A): 188-191. 
Abstract PDF(3756KB) ( 457 )   
How to make full use of the information contained in an image for super-resolution reconstruction is still an open question. This paper proposed an image super-resolution reconstruction algorithm based on adaptive sparse representation and non-local self-similarity. In the training and reconstruction process, the K-means algorithm is used to cluster the selected data sets so that similar image blocks are gathered together, and PCA is then used to build adaptively selected dictionaries for super-resolution reconstruction. Compared with reconstruction using a fixed dictionary, reconstruction with adaptively selected dictionaries yields superior results. The experimental results on natural images show that the super-resolution images reconstructed by the proposed algorithm are more detailed, with fewer artifacts and sharper edges.
UAV Fault Recognition Based on Semi-supervised Clustering
WANG Nan, SUN Shan-wu
Computer Science. 2019, 46 (6A): 192-195. 
Abstract PDF(1555KB) ( 431 )   
Compared with manned vehicles, UAVs (Unmanned Aerial Vehicles) have many advantages, which make them widely used in military, civilian and scientific research fields. However, due to the lack of real-time decision-making ability, UAVs have a high accident rate. Fault prediction is the core of UAV health management technology. Before building a fault prediction model, an important step is to identify the patterns in the sampled data so as to add accurate labels to the training data for modeling, which is also part of improving the flight portrait. Based on the UAV flight data accumulated in the big data platform of a UAV manufacturer in Shenyang, this paper proposed a semi-supervised clustering technique to automatically identify the normal points of the flight process, the fault points (including crash points) and the points after a crash. At the same time, management and statistics are strengthened, and the efficiency and accuracy of adding precise labels to the historical flight data are greatly improved. Real flight data and flight test data were used to verify the results. Manual verification shows that the recognition rate of fault points can reach over 80%.
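The semi-supervised labelling step can be sketched as seeded K-means: a few manually labelled flight-data points fix the initial centroids, and ordinary K-means iterations then assign labels to the unlabelled points. The 1-D feature values below are invented stand-ins, and seeded K-means is only one common form of semi-supervised clustering; the paper's exact technique may differ.

```python
# Hedged sketch of seeded (semi-supervised) clustering: labelled samples
# anchor the initial centroids and keep their labels during iteration.

def seeded_kmeans(labelled, unlabelled, iters=10):
    # labelled: {class_name: [feature values]}
    names = sorted(labelled)
    centres = {c: sum(v) / len(v) for c, v in labelled.items()}
    for _ in range(iters):
        groups = {c: list(labelled[c]) for c in names}  # seeds keep their label
        for x in unlabelled:
            c = min(names, key=lambda n: abs(x - centres[n]))
            groups[c].append(x)
        centres = {c: sum(g) / len(g) for c, g in groups.items()}
    # Final label assignment for the unlabelled points.
    return {x: min(names, key=lambda n: abs(x - centres[n])) for x in unlabelled}

labelled = {"normal": [0.1, 0.2], "fault": [5.0, 5.2]}
assign = seeded_kmeans(labelled, [0.15, 5.1, 0.3, 4.9])
```

The labelled seeds never change cluster, which is what turns plain clustering into an automatic labeller for the remaining historical flight data.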
SAR Image Feature Retrieval Method Based on Deep Learning and Synchronic Matrix
PENG Jin-xi, SU Yuan-qi, XUE Xiao-rong
Computer Science. 2019, 46 (6A): 196-199. 
Abstract PDF(2174KB) ( 620 )   
Speckle noise in Synthetic Aperture Radar (SAR) images makes traditional SAR image interpretation quite complicated, and the image quality and visual effect obtained by traditional SAR image retrieval methods are not ideal, even though the speckle distribution and texture information in SAR images are themselves abundant. In order to improve the retrieval efficiency of SAR images, an image retrieval method based on the visual features of the images was proposed, which improves their visual effect and facilitates intuitive observation of their texture (cell) information, using deep learning to combine the advantages of fuzzy theory and neural networks and to improve image processing performance. Firstly, the texture features of SAR images are extracted by deep-learning-based semantic clustering; then the SAR images are characterized by the co-occurrence matrix method; finally, the texture features and the filtered grey-component vectors are retrieved by the deep learning method to classify image cells. The experimental results show that the proposed method achieves more precise results in SAR image retrieval, with better visual effects and analysis efficiency, and is effective in suppressing the influence of speckle noise on SAR image texture features.
Self-adapting Regular Constraint Algorithm in Super-resolution of Single-frame Images
LI Hai-xue, LIN Hai-tao, CHEN Jin
Computer Science. 2019, 46 (6A): 200-204. 
Abstract PDF(2504KB) ( 434 )   
As a typical underdetermined problem, super-resolution of single-frame images needs to be constrained by regularization terms in the optimization process, so as to improve the stability of super-resolution reconstruction. Smoothness regularizers, commonly used in super-resolution, may lead to the loss of high-frequency information, blur marginal areas and affect the visual quality of reconstructed images. Based on a Markov random field (MRF), this paper built a local image model that characterizes the correlation between the pixels in a local image block and realizes self-adapting regularization in the super-resolution process, which effectively avoids blurring in marginal areas and other positions in the images and improves the performance of image reconstruction.
Triangulation Reconstruction of Plantar Surface Based on Depth Feature
MENG Wen-quan, WU Li-sheng
Computer Science. 2019, 46 (6A): 205-207. 
Abstract PDF(2689KB) ( 326 )   
3D reconstruction can be understood as the fitting of curves and surfaces, and triangulation of complex surfaces is widely applied. This paper proposed a detail triangulation method based on the depth feature of point clouds, and introduced in detail the reconstruction of foot point cloud data obtained by line laser scanning. First, the strip data points are segmented by piecewise processing, and a threshold is set to supplement missing points and delete useless segments. Then, the 8-adjacency domain method is used to find the boundary points and sort the closed curves, and the strip data points are filtered with a Savitzky-Golay filter. Finally, based on the filtered strip data points, the topological relationship of the two-dimensional mesh forms a quadrilateral network, and the detail triangulation method is used to split the faces. Experiments show that the surface reconstruction is fast, and the boundary and internal details are clear.
Low-contrast Crack Detection Method Based on Fractional Fourier Transform
ZHOU Li-jun, LIU Xiao
Computer Science. 2019, 46 (6A): 208-210. 
Abstract PDF(3139KB) ( 408 )   
Due to the complexity of tunnel structure and environment, strong interference such as concrete mud, dirt and water seepage areas exists in tunnel crack detection, which results in low contrast between the background and small cracks; cracks are therefore easily missed by conventional morphological methods. In order to solve this problem, this paper proposed a crack detection method based on the fractional Fourier transform. In this method, the image is mapped to different time-frequency domains by fractional Fourier transforms of different orders, which helps extract the filth features in the crack image. The background contrast of the image is balanced by compensating the filth region with background information. The fractional differential method is used to enhance the image, and the connected domain method is used to extract the cracks. Experimental results show that the proposed method can effectively remove the filth regions and detect tunnel cracks with low contrast.
Outdoor Lighting Estimation Algorithm Based on White Balance Correction
FANG Jing, ZHANG Rui, CUI Wei, HAN Hui-jian
Computer Science. 2019, 46 (6A): 211-214. 
Abstract PDF(2444KB) ( 432 )   
This paper proposed a fast algorithm, requiring no user interaction, for estimating outdoor illumination parameters from outdoor scene images taken at the same sun position under different weather conditions. The K-means algorithm is used to detect the shadow area and obtain the initial sky light parameters, and the Grey-World algorithm is used to obtain the initial sunlight parameters. Then the base image is solved, and white balance correction is applied to it so as to iteratively optimize the illumination parameters more accurately. Experimental results show that the reconstructed image obtained by the proposed algorithm has less error than those of existing algorithms, and that the proposed algorithm is faster, more convenient and more accurate, so it can be well applied to augmented reality.
Clothing Image Retrieval Method Combining Convolutional Neural Network Multi-layer Feature Fusion and K-Means Clustering
HOU Yuan-yuan, HE Ru-han, LI Min, CHEN Jia
Computer Science. 2019, 46 (6A): 215-221. 
Abstract PDF(2911KB) ( 458 )   
The boom of clothing e-commerce has accumulated a large amount of clothing image data, and “image search” for clothing images has become a hot research direction. Clothing images have rich overall semantic information and a large amount of detailed information, so accurate retrieval is a challenging problem. Traditional methods based on manual semantic annotation, and image retrieval methods based on hand-designed content features such as color and texture, have significant limitations. This paper proposed a clothing image retrieval method based on multi-layer feature fusion of convolutional neural networks and K-Means clustering, which makes full use of the effectiveness and hierarchy of deep convolutional neural networks in image feature extraction, fuses the detailed information and abstract semantic information of different convolutional levels to improve retrieval accuracy, and uses K-Means to improve retrieval speed. The proposed method first normalizes the size of the clothing images, then uses a convolutional neural network for training and feature extraction, extracting multi-level features of the clothing image from low to high and fusing the features of the various levels. Finally, the K-Means clustering method is used to retrieve large-scale image data efficiently. The experimental results on the DeepFashion sub-datasets Category and Attribute Prediction Benchmark and In-shop Clothes Retrieval Benchmark show that the proposed method can effectively enhance the feature expression ability of clothing images and improve retrieval accuracy and speed, outperforming other mainstream methods.
Image Compression Method Combining Canny Edge Detection and SPIHT
WANG Ya-ge, KANG Xiao-dong, GUO Jun, HONG Rui, LI Bo, ZHANG Xiu-fang
Computer Science. 2019, 46 (6A): 222-225. 
Abstract PDF(2756KB) ( 343 )   
To solve the problem that reconstructed images obtained by the SPIHT algorithm lose texture details, this paper proposed an image compression algorithm combining Canny edge detection and SPIHT. First, Canny edge detection is performed on the image and the extracted edge map is obtained. Second, the SPIHT algorithm is used to encode the image, the encoded code stream is further encoded and decoded with Huffman coding, and a reconstructed image is obtained after SPIHT decoding and inverse wavelet transformation. Finally, the two reconstructed components are added to recover the original image. The results show that, compared with the SPIHT-plus-Huffman algorithm, the PSNR value and information entropy of the reconstructed images are improved at low bits per pixel, and the information content of the reconstructed images is increased.
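The underlying "preserve edges exactly, compress the rest coarsely" idea can be shown on a 1-D signal. This is a toy illustration only: Canny is replaced by a gradient threshold and SPIHT by uniform quantisation, both of which are assumptions made to keep the sketch self-contained.

```python
# Toy sketch of the edge-plus-compressed-remainder idea (1-D signal;
# a gradient threshold stands in for Canny, uniform quantisation for SPIHT).

def detect_edges(signal, thresh=10):
    # Positions with a large forward difference act as "edges".
    return {i for i in range(len(signal) - 1)
            if abs(signal[i + 1] - signal[i]) > thresh}

def quantise(signal, step=16):
    # Coarse stand-in for SPIHT coding of the bulk of the signal.
    return [step * round(v / step) for v in signal]

def compress_with_edges(signal):
    edges = detect_edges(signal)
    coarse = quantise(signal)
    # Keep exact values next to edges; coarse values elsewhere.
    kept = {i for e in edges for i in (e, e + 1)}
    return [signal[i] if i in kept else coarse[i] for i in range(len(signal))]

signal = [20, 21, 22, 90, 91, 92, 23, 22]
recon = compress_with_edges(signal)
```

The reconstruction keeps the sharp transitions bit-exact while the smooth runs carry only quantised values, which is the same trade-off the combined Canny/SPIHT scheme aims for on 2-D images.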
Delaunay Triangular Mesh Optimization Algorithm
QING Wen-xing, CHEN Wei
Computer Science. 2019, 46 (6A): 226-229. 
Abstract PDF(2137KB) ( 540 )   
After years of research and development in the petroleum field, some grid-related basic algorithms such as the Delaunay triangulation algorithm have gradually matured. However, with the development of technology, the requirements on relevant algorithms and software in the industry keep rising, so existing methods can no longer meet actual needs. This paper studied and analyzed the characteristics and shortcomings of conventional triangulation algorithms, and proposed a method that combines the point-by-point insertion and divide-and-conquer ideas to quickly generate a Delaunay triangular mesh, so that the number of layout points does not have a great impact on the efficiency of mesh construction. A large number of tests verify that the algorithm has greater advantages over traditional algorithms in terms of correctness, stability and efficiency.
Global Residual Recursive Network for Image Super-resolution
ZHANG Lei, HU Bo-wen, ZHANG Ning, WANG Mao-sen
Computer Science. 2019, 46 (6A): 230-233. 
Abstract PDF(2739KB) ( 586 )   
Deep network models have achieved great success in image super-resolution, and it has been proven that the quality of high-resolution images reconstructed from low-resolution inputs is generally higher than with traditional algorithms. In order to further improve reconstruction quality, a global residual recursive network was proposed. By optimizing the classical residual network, global residual-block feature fusion and local residual-block feature fusion are introduced, which give the model adaptively updated weights and improve information flow. Combined with the L1 cost function, the ADAM optimizer further improves training stability, and the model is trained on the DIV2K training set. Reconstruction quality is measured by the PSNR/SSIM indices; on the SSIM index, the maximum value is 0.94, which is superior to the 0.92 of the latest deep learning model (EDSR). The global residual recursive network model effectively improves image reconstruction quality, reduces training time, effectively avoids gradient attenuation, and improves learning efficiency.
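The local and global skip connections at the heart of such a network can be shown numerically. This is a minimal sketch with scalar "feature maps" and a multiplicative stand-in for each convolutional block; the real model's layers and weights are of course far richer.

```python
# Minimal numeric sketch of local and global residual connections
# (scalars stand in for feature maps, w*x for a convolutional block F).

def local_residual_block(x, weight):
    # F(x) + x : the block only has to learn the residual.
    return weight * x + x

def global_residual_network(x, weights):
    out = x
    for w in weights:
        out = local_residual_block(out, w)
    return out + x            # global skip from the network input

y = global_residual_network(2.0, [0.1, 0.2, 0.0])
```

With all block weights at zero the network reduces to the identity plus the global skip (output = 2x), which is why gradients keep a direct path to the input and gradient attenuation is avoided.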
Improved Method for Blade Shape Simulation Based on Vein Shape Function
Computer Science. 2019, 46 (6A): 234-238. 
Abstract PDF(2572KB) ( 569 )   
The shape of plant leaves is mostly flaky, which makes the two-dimensional shape of leaves prominent. However, three-dimensional features of leaves, such as bending and concavity and convexity, are also an important part of blade morphology. Based on the simulation method proposed by Runions et al. for plant leaf morphology in a two-dimensional plane, a three-dimensional blade shape simulation method based on vein shape functions was proposed. Firstly, a B-spline curve is used to specify the shape function in the third dimension for different grades of veins, obtaining veins with three-dimensional shape. The blade edge is then moved according to the vein structure to obtain the three-dimensional shape of the edge, and the edge and the veins are sampled. The constrained Delaunay triangulation algorithm is used to construct the foliar triangle mesh, and the Loop subdivision algorithm is used to smooth the mesh, generating a smooth foliar mesh model. A plant leaf model with three-dimensional morphology can then be obtained. Experiments show that this method can effectively generate three-dimensional plant leaf models of various forms, which can be used for morphological simulation of real plant leaves.
Light-weight Recognition Algorithm of Vehicle License Plate Characters
MA Li-xin, LI Feng-kun
Computer Science. 2019, 46 (6A): 239-241. 
Abstract PDF(1550KB) ( 468 )   
Character recognition is the key step of vehicle license plate recognition (VLPR). After examining the character set used by vehicle license plates, concepts such as the shape feature vector (SFV) were proposed, and the feasibility of using SFVs to recognize license plate characters was proven theoretically. A character recognition algorithm for vehicle license plates was then proposed and evaluated by simulation. The simulation results show that the SFV can be used for license plate character recognition, and the SFV-based algorithm has an accuracy rate of 97.31%. Besides, the algorithm has no complex training process and does not require large amounts of data to record training results. It is simple to implement and is a lightweight license plate character recognition algorithm.
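A training-free shape-vector matcher of this kind can be sketched in a few lines. The paper does not spell out its SFV definition here, so the row/column ink-projection vector below is an illustrative assumption; the point is only that a fixed feature vector plus nearest-template matching needs no training phase.

```python
# Sketch of a simple shape feature vector (SFV) for binary character
# glyphs: row and column ink-projection counts (an assumed SFV definition,
# not necessarily the paper's), matched by nearest template.

def shape_feature_vector(glyph):
    rows = [sum(r) for r in glyph]                              # ink per row
    cols = [sum(r[j] for r in glyph) for j in range(len(glyph[0]))]  # ink per column
    return rows + cols

def match(glyph, templates):
    v = shape_feature_vector(glyph)
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(v, shape_feature_vector(t)))
    return min(templates, key=lambda name: dist(templates[name]))

ONE = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
DASH = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
label = match([[0, 1, 0], [0, 1, 0], [0, 1, 0]], {"1": ONE, "-": DASH})
```

The "training" is just storing one feature vector per template character, which matches the abstract's claim of a lightweight method without a complex training process.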
Visualization of Wind Vectors Using Line Integral Convolution with Visual Perception
MA Ying-yi, LI Hong-ping, GUO Yi-feng
Computer Science. 2019, 46 (6A): 242-245. 
Abstract PDF(4154KB) ( 362 )   
A great deal of effort has gone into improving flow visualization algorithms and the display of flow patterns in recent years. However, in algorithm implementation and quality assessment, ambiguity of flow field direction and unclear depiction of vector field direction and magnitude are often encountered. In order to solve this problem, the theory of human visual perception was used to evaluate the quality of visualization results and improve the flow patterns. To put this design into practice, a display of wind vectors using the line integral convolution algorithm, guided by visual perception, was presented.
Gesture Recognition Based on Hand Geometric Distribution Feature
HAN Xiao, ZHANG Jing, LI Yue-long
Computer Science. 2019, 46 (6A): 246-249. 
Abstract PDF(2208KB) ( 445 )   
Aiming at the low recognition rate caused by gesture scaling and rotation, this paper proposed a feature extraction method based on hand geometric distribution for gesture recognition. Firstly, the segmented gesture image is normalized. Secondly, the gesture main direction and the width-to-length ratio of the minimum circumscribed rectangle of the gesture contour are calculated, and a similarity function is used for preliminary recognition to select candidate gestures. Finally, a contour segmentation method is used to estimate the distribution of gesture contour points in polar coordinates, and the modified Hausdorff distance is used as the similarity measure to identify the final gesture. The experimental results show that the proposed method can identify various gestures quickly and accurately: the average recognition rate reaches 92.89%, the false recognition rate is reduced to 3.53%, and the recognition speed is 4.2 times that of similar algorithms.
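The final matching step uses the modified Hausdorff distance (MHD) between two contour point sets, which can be stated compactly: each directed distance averages, over the points of one set, the distance to the nearest point of the other set, and the MHD is the larger of the two directions. A minimal sketch (toy 2-D point sets, not real gesture contours):

```python
# Sketch of the modified Hausdorff distance (MHD) between two 2-D point
# sets, as used for the final gesture similarity measure.

import math

def mhd(A, B):
    def directed(P, Q):
        # Average nearest-neighbour distance from P into Q.
        return sum(min(math.dist(p, q) for q in Q) for p in P) / len(P)
    return max(directed(A, B), directed(B, A))

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.1, 0), (0.1, 1), (1.1, 0), (1.1, 1)]
far = [(5, 5), (6, 5), (5, 6), (6, 6)]
d_close = mhd(square, shifted)
d_far = mhd(square, far)
```

Averaging instead of taking the maximum (as the classical Hausdorff distance does) makes the measure far less sensitive to single outlier contour points.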
Facial Expression Transfer Method Based on Deep Learning
LIU Jian, JIN Ze-qun
Computer Science. 2019, 46 (6A): 250-253. 
Abstract PDF(2107KB) ( 866 )   
In order to solve the problems of low image quality, long training and slow generation in facial expression transfer, this paper proposed a facial expression transfer method based on generative adversarial networks to make expression transfer faster and more natural. Firstly, facial features are extracted with a convolutional neural network, and images are mapped from the high-dimensional space to a shallow space. In the shallow space, facial expression features are discriminated by the generative adversarial network. Then nearest-neighbour up-sampling and convolutional neural networks are used to map the image from the shallow space back to the high-dimensional space, and in this process the facial expression is changed by adding facial expression feature maps into the network. Compared with Fader Networks, the parameter count of the proposed model is reduced by 43.7% and training time by 36%. The experimental results show that the proposed method can effectively improve the quality and generation speed of the images.
Vehicle Recognition Model Based on Multi-feature Combination in Convolutional Neural Network
LIU Ze-kang, SUN Hua-zhi, MA Chun-mei, JIANG Li-fen
Computer Science. 2019, 46 (6A): 254-258. 
Abstract PDF(2507KB) ( 541 )   
References | RelatedCitation | Metrics
Vehicle recognition plays an important role in intelligent transportation and can be used in many fields such as traffic violation capture, traffic jam warning and automatic driving. This paper proposed a joint model (E-CNN) that combines vehicle edge features to identify vehicles. This simple and effective feature combination not only improves recognition accuracy but also accelerates the convergence of the model. To verify the performance of E-CNN, the multi-feature combination model was compared with VGG16 and GoogLeNet. The experimental results show that the convergence speed of the proposed model has obvious advantages over VGG16 and GoogLeNet. Furthermore, the recognition accuracy of the proposed model reaches 99.90%, higher than the 99.82% of VGG16 and the 99.35% of GoogLeNet.
Color Image Enhancement Algorithm Based on PCNN Internal Activities
XU Min-min, KOU Guang-jie, MA Yun-yan, YUE Jun, JIA Shi-xiang, ZHANG Zhi-wang
Computer Science. 2019, 46 (6A): 259-262. 
Abstract PDF(2487KB) ( 312 )   
References | RelatedCitation | Metrics
Pulse coupled neural network (PCNN) is a neural network inspired by the working principle of the mammalian visual nervous system. Owing to these biological characteristics, it has great advantages in digital image processing. After analyzing the operating principle and action mechanism of PCNN, it is found that the internal activity of PCNN itself has an obvious enhancement effect on the original image. Combining it with a brightness adjustment algorithm based on human vision, this paper proposed an improved color image enhancement algorithm. Compared with common image enhancement algorithms, the proposed algorithm performs well in both subjective and objective evaluations of the experimental results, and its code is more concise and efficient.
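The internal activity the abstract refers to is the term U = S(1 + beta*L) of the standard PCNN iteration. A minimal sketch, with illustrative parameter values rather than the authors' settings and the feeding input simplified to the stimulus itself:

```python
import numpy as np

def pcnn_internal_activity(S, steps=10, beta=0.2, aL=1.0, aT=0.5,
                           VL=1.0, VT=20.0):
    """Run a simplified PCNN over a normalized gray image S (values in
    [0, 1]) and return the internal activity U, which already exhibits
    the enhancement effect described in the abstract."""
    L = np.zeros_like(S)          # linking input
    U = np.zeros_like(S)          # internal activity
    T = np.ones_like(S)           # dynamic threshold
    Y = np.zeros_like(S)          # pulse output
    def neighbour_sum(Y):
        # sum of the 8-neighbourhood firing map (wrap-around padding
        # via roll is acceptable for a sketch)
        return sum(np.roll(np.roll(Y, dx, 0), dy, 1)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
    for _ in range(steps):
        L = np.exp(-aL) * L + VL * neighbour_sum(Y)   # linking decays + fires
        U = S * (1.0 + beta * L)                      # internal activity
        Y = (U > T).astype(float)                     # pulse when over threshold
        T = np.exp(-aT) * T + VT * Y                  # threshold decay/reset
    return U
```

The enhanced image would then be obtained by rescaling U back to the display range.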
Filtering Algorithm Based on Gaussian-salt and Pepper Noise
ZHANG Xu-tao
Computer Science. 2019, 46 (6A): 263-265. 
Abstract PDF(2206KB) ( 326 )   
References | RelatedCitation | Metrics
Images are easily polluted by mixed noise during acquisition, transmission and storage, especially mixed Gaussian and salt-and-pepper noise. Considering that conventional filtering algorithms are basically designed for a single kind of noise and suppress mixed noise unsatisfactorily, this paper proposed a novel filtering algorithm for mixed Gaussian and salt-and-pepper noise. The experimental results reveal that the proposed algorithm outperforms traditional algorithms in filtering out mixed noise under a comprehensive evaluation of both subjective and objective aspects, and it has certain reference value for mixed-noise removal.
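A common two-stage strategy for this kind of mixed noise (a generic sketch of the idea, not the paper's exact algorithm; the impulse thresholds are assumptions) is to replace extreme-valued pixels with the local median and then smooth the remaining Gaussian component:

```python
import numpy as np

def remove_mixed_noise(img, sp_low=0.02, sp_high=0.98):
    """Two-stage filter for Gaussian + salt-and-pepper mixed noise on an
    image with values in [0, 1]: impulse pixels (near 0 or 1) get the
    local 3x3 median, then a 3x3 mean filter suppresses Gaussian noise."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    def windows(a):
        # stack every 3x3 neighbourhood, shape (H, W, 9)
        p = np.pad(a, 1, mode='edge')
        return np.stack([p[i:i + H, j:j + W]
                         for i in range(3) for j in range(3)], axis=-1)
    med = np.median(windows(img), axis=-1)
    impulse = (img <= sp_low) | (img >= sp_high)   # likely salt/pepper
    stage1 = np.where(impulse, med, img)           # stage 1: median repair
    return windows(stage1).mean(axis=-1)           # stage 2: mean smoothing
```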
3D Retrieval Algorithm Based on Multi-feature
LI Yue-feng
Computer Science. 2019, 46 (6A): 266-269. 
Abstract PDF(2216KB) ( 318 )   
References | RelatedCitation | Metrics
In 3D model retrieval, the sampling process may be biased on complex local surfaces of a model, a problem that arises during shape distribution feature extraction. Aiming at this problem, a statistical feature of the model based on cosine values was proposed as a second statistical feature, and a relevance-feedback algorithm with correlation weights determines the weights used to combine the two geometric features into the 3D model feature representation; finally, Euclidean distance is used for similarity matching. Experiments verify that retrieving 3D models with this feature representation can improve both recall and precision.
Line Tracking and Matching Algorithm Based on Semi-direct Method in Image Sequence
ZHU Shi-xin, YANG Ze-min
Computer Science. 2019, 46 (6A): 270-273. 
Abstract PDF(3243KB) ( 460 )   
References | RelatedCitation | Metrics
Considering the small motion between frames in an image sequence, this paper proposed a line tracking and matching algorithm based on the semi-direct method. Firstly, feature points and lines are extracted and matched in the key frames. Secondly, the feature points of each line are reconstructed by structure from motion. Then, feature point tracking and relative pose estimation are computed through the inverse compositional image alignment algorithm. Finally, the line matching result is obtained from the feature point tracking result. Two groups of image sequence experiments were conducted to validate the proposed algorithm. The results indicate that the algorithm can track and match lines in the image sequence while simultaneously estimating the camera pose from the sparse feature points of the lines. However, the camera pose error accumulates as the number of frames increases, so the algorithm still needs correction for accumulated error.
Face Recognition Using SPCA and HOG with Single Training Image Per Person
HAN Xu, CHEN Hai-yun, WANG Yi, XU Jin
Computer Science. 2019, 46 (6A): 274-278. 
Abstract PDF(5338KB) ( 283 )   
References | RelatedCitation | Metrics
Face recognition based on a single sample is a challenging task. This paper combined the Similar Principal Component Analysis (SPCA) algorithm and the Histograms of Oriented Gradients (HOG) algorithm: SPCA is used to screen out the similar information of the image class, and the similar information blocks are quantified with HOG, making the two advantages complementary. Finally, Pearson correlation (PC) is used to measure similarity, and experiments are conducted on the Extended Yale B database. Experimental results show that the proposed algorithm has better recognition performance than traditional algorithms when the illumination of the face image changes.
Object Detection Algorithm Based on Context and Multi-scale Information Fusion
LV Pei-jian, CHEN Jia-peng, YUAN Fei, PENG Qiang, XIANG Yu
Computer Science. 2019, 46 (6A): 279-283. 
Abstract PDF(2105KB) ( 416 )   
References | RelatedCitation | Metrics
Recent advances in convolutional neural networks (CNNs) have led to significant improvements in object detection. To address the missing context and multi-scale information in the SqueezeDet algorithm, this paper combines skip connections and shortcut connections to aggregate multi-scale feature maps, and uses dilated convolution to expand the receptive field and capture context. A context-based multi-scale object detection model was proposed to effectively improve the accuracy and robustness of object detection in complex scenes. The model fuses three feature maps of different resolutions: the smallest and middle-sized feature maps gather context through dilated convolution, the smallest feature map is doubled in size through bilinear interpolation, and the largest feature map is down-sampled with a stride-2 convolution. The three feature maps then share the same size and can be fused. In addition, shortcut connections link feature maps of different sizes to recover information lost from the larger feature maps. The model is evaluated on the KITTI autonomous-driving benchmark and achieves a 6% improvement over SqueezeDet, while reaching a speed of 30 fps on a GPU.
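The resolution-matching step of the fusion can be sketched on raw arrays (nearest-neighbour up-sampling and average pooling stand in for the paper's bilinear interpolation and stride-2 convolution, so this is a shape-level illustration only):

```python
import numpy as np

def fuse_three_scales(small, mid, large):
    """Fuse three (C, H, W) feature maps where `large` is 2x the spatial
    size of `mid` and `small` is half of it: up-sample the smallest map
    2x, down-sample the largest map 2x, then concatenate all three along
    the channel axis, as in the model described above."""
    up = small.repeat(2, axis=1).repeat(2, axis=2)                  # 2x up
    c, h, w = large.shape
    down = large.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4)) # 2x down
    assert up.shape[1:] == mid.shape[1:] == down.shape[1:]
    return np.concatenate([up, mid, down], axis=0)
```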
Monitoring Video Fire Detection Algorithm Based on Dynamic Characteristics and Static Characteristics
XIAO Xiao, KONG Fan-zhi, LIU Jin-hua
Computer Science. 2019, 46 (6A): 284-286. 
Abstract PDF(3478KB) ( 644 )   
References | RelatedCitation | Metrics
Fire is one of the most common hazards to public safety and social development, so timely and accurate fire alarms are of great significance. Video-based fire detection overcomes the shortcomings of traditional technology and adapts well to various environments; combined with intelligent detection algorithms, it can provide more intuitive and richer fire information. In this paper, the static characteristics of the video images are analyzed to obtain suspected flame regions, and then the flame is further judged by its dynamic characteristics. The effectiveness of the algorithm is demonstrated in MATLAB, and it has a good application prospect.
Network & Communication
Validation of Synthetic Aperture Radar (SAR) Imaging Algorithm Based on Simulation
ZENG Le-tian, YANG Chun-hui, LI Qiang, CHEN Ping
Computer Science. 2019, 46 (6A): 287-290. 
Abstract PDF(3404KB) ( 618 )   
References | RelatedCitation | Metrics
The imaging algorithm is crucial to the performance of synthetic aperture radar (SAR). Existing testing methods not only need real equipment, radar data and a testing environment, but also lack a reasonable evaluation of the imaging result, which greatly affects the efficiency and effectiveness of software testing. To solve these problems, this paper presented a novel simulation-based testing method for the validation of SAR imaging algorithms. Firstly, the echo data are generated independently via an improved concentric-circle method, eliminating the constraint of real echo data. Then, the correctness and feasibility of imaging algorithms are evaluated scientifically by quantitative indicators combined with point-target imaging as well as distributed scene-target imaging. The proposed method greatly improves the effectiveness of the testing work. Finally, the correctness and effectiveness of the proposed method were verified by simulation experiments.
Study on SDN Network Load Balancing Based on IACO
ZHENG Ben-li, LI Yue-hui
Computer Science. 2019, 46 (6A): 291-294. 
Abstract PDF(1841KB) ( 485 )   
References | RelatedCitation | Metrics
Studying SDN network load balancing while considering server processing performance is of great significance for allocating resources reasonably and improving service performance. Therefore, this paper studied SDN load balancing based on an improved ant colony optimization (IACO) algorithm. Firstly, the structure and load balance of SDN are analyzed. Then, according to the actual demands of SDN load balancing, the traditional ant colony algorithm is improved: the idle rate of each link's bandwidth is taken as the pheromone, the performance of the server processor and the amount of data to be transmitted are taken as the heuristic information, and multiple heuristics are combined. The convergence of the improved algorithm is also proved. Finally, performance verification simulations were performed. The simulation results verify that the proposed algorithm converges quickly with short time consumption, and simulation of SDN network load balancing also proves the validity and feasibility of the method.
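The link-selection rule implied by the abstract can be sketched with the classical ant-colony probability p_i proportional to tau_i^alpha * eta_i^beta, where the bandwidth idle rate plays the role of the pheromone tau and server performance over data volume plays the heuristic eta (the exponents and the exact form of eta are assumptions, not the paper's formula):

```python
import numpy as np

def choose_link(idle_rates, server_perf, data_size,
                alpha=1.0, beta=2.0, rng=None):
    """Draw a link index with probability proportional to
    pheromone^alpha * heuristic^beta, the standard ACO transition rule."""
    rng = rng or np.random.default_rng(0)
    tau = np.asarray(idle_rates, dtype=float)          # pheromone: link idle rate
    eta = np.asarray(server_perf, dtype=float) / data_size  # heuristic
    score = tau ** alpha * eta ** beta
    p = score / score.sum()
    return int(rng.choice(len(p), p=p)), p
```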
Cloud Resource Scheduling Algorithm Based on Game Theory
XU Fei, WANG Shao-chang, YANG Wei-xia
Computer Science. 2019, 46 (6A): 295-299. 
Abstract PDF(2786KB) ( 594 )   
References | RelatedCitation | Metrics
In a large data center in a cloud environment, the number of virtual machines and their loads change frequently with the needs of users and applications. The virtual machines need dynamic resource adjustment to remove hotspot resources in the system in time and to implement load balancing for the entire system. Theoretical research on cloud resource allocation has produced methods such as the First-Fit greedy algorithm and the Round-Robin polling algorithm, which can be applied in some cloud systems to solve problems in a short time, but they still have problems with resource utilization and load. Therefore, this paper proposed a cloud resource scheduling algorithm based on game theory. The algorithm breaks the bottleneck of fixed-quantity resource allocation, takes QoS into consideration, and addresses resource utilization and the fairness of resource allocation. Simulation results show that the proposed algorithm can significantly improve the effectiveness of dynamic resource scheduling and the efficiency of resource usage under dynamic load.
Analysis of Characteristics and Applications of Chinese Aviation Complex Network Structure
CHEN Hang-yu, LI Hui-jia
Computer Science. 2019, 46 (6A): 300-304. 
Abstract PDF(2429KB) ( 531 )   
References | RelatedCitation | Metrics
With the continuous improvement of the economic and social value of air transport, research and analysis of the structure of the air transport network, the carrier of air transport, is of great significance. Based on the flight data of major airlines in China, this paper used complex network theory to analyze the characteristics of China's aviation network, and showed that it is a small-world network with scale-free characteristics. By analyzing the basic statistical characteristics of the Chinese aviation complex network in 2015, we found that the average path length decreased, the average node degree increased, and the clustering coefficient tended to be stable. The paper then analyzed the interaction of node indices, edge indices and weighted indices of the network, and studied the influence of changes in different indices on the network structure and their practical significance. In addition, the paper found that the degree-degree, degree-weight and betweenness-betweenness correlations, which reflect the connection preference and structural characteristics of the Chinese airline network, are all negative. Finally, applications of the research results were analyzed and discussed.
Three-dimensional Geographic Opportunistic Routing Based on Energy Harvesting Wireless Sensor Networks
WANG Chen-yang, LIN Hui
Computer Science. 2019, 46 (6A): 305-308. 
Abstract PDF(1971KB) ( 310 )   
References | RelatedCitation | Metrics
Using energy harvesting technology, the nodes in a wireless sensor network can gain energy from the environment and keep working for a long time with a small battery capacity. Considering that WSNs are mostly deployed in three-dimensional space in practical applications, and based on a study of traditional geographic routing protocols, this paper proposed a three-dimensional geographic opportunistic routing algorithm for energy harvesting wireless sensor networks. First, the algorithm divides the space into cubes and chooses an appropriate cube as the next forwarding region. The nodes in the region calculate a back-off time according to their residual energy and delivery rate, and the node with the shortest back-off time becomes the forwarding node. The simulation results show that the algorithm can improve the data delivery rate effectively, balance the energy consumption of the nodes, reduce the average packet delivery time and increase throughput.
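The contention step can be sketched as follows; the linear back-off form and the `t_max` constant are illustrative assumptions consistent with the abstract (more residual energy and a higher delivery rate mean a shorter wait), not the paper's exact formula:

```python
def backoff_time(residual_energy, delivery_rate, t_max=0.01):
    """Back-off time in seconds for a candidate in the forwarding cube.
    residual_energy and delivery_rate are both normalized to [0, 1];
    the best candidate answers first."""
    score = residual_energy * delivery_rate       # in [0, 1]
    return t_max * (1.0 - score)

def pick_forwarder(candidates):
    """candidates: list of (node_id, residual_energy, delivery_rate).
    Returns the id of the node whose back-off timer expires first."""
    return min(candidates, key=lambda c: backoff_time(c[1], c[2]))[0]
```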
System Design of Space Information Network Architecture
YANG Liu, WANG Chuang, WANG Jun-yi
Computer Science. 2019, 46 (6A): 309-311. 
Abstract PDF(2906KB) ( 541 )   
References | RelatedCitation | Metrics
The space information network, covering satellites, telecommunication platforms and the ground network, has become the development trend of space communication networks. This paper studied the characteristics of the space information network and presented the system design of a space information network architecture. It analyzed the cost and coverage performance of various space communication platforms and optimized the architecture accordingly. On this basis, this paper proposed a design of the space segment in which GEO satellites form the backbone network, LEO or IGSO satellites form an enhanced network, and the telecommunication platform provides an emergency security network in hot spots and emergency areas.
SDN-based Network Controller Algorithm for Load Balancing
DOU Hao-ming, JIANG Hui, CHEN Si-guang
Computer Science. 2019, 46 (6A): 312-316. 
Abstract PDF(2434KB) ( 442 )   
References | RelatedCitation | Metrics
Currently, emerging network technologies show a booming development trend, bringing great convenience to people's lives, but they also impose newer and higher requirements on the efficient processing of big data with the desired security and reliability. On the one hand, the processing ability of traditional networks can hardly meet these performance and security requirements; on the other hand, existing research on traffic scheduling optimization, aiming at higher network benefits, focuses almost exclusively on link factors and neglects the server side. In this paper, aiming at the shortcomings of existing traffic scheduling optimization algorithms, an optimization algorithm called PSTS (Path-Server Traffic Scheduling), which additionally takes the server side into account, was proposed. The PSTS algorithm is based on the SDN (Software Defined Network) paradigm and is implemented modularly on the Ryu controller. In the implementation, by measuring the impact factors (performance metrics) at both the link and server levels, the algorithm sorts and filters the links and servers by computing weights from these factors, and the sorting and filtering results provide strong support for the final optimal traffic scheduling. The simulation results show that the PSTS algorithm achieves higher average bandwidth utilization and lower average transmission delay than the DLB (Dynamic Load Balancing) algorithm under the same traffic load. At the same time, the proposed algorithm can distribute data streams to the servers in a more balanced way when the network carries a large number of streams, which indicates that it can significantly avoid local congestion, improve the processing speed of data streams, and ultimately enhance the overall performance of the network.
Optimized Convex Localization Algorithm Using Multiple Communication Radius and Angle Correction
YE Juan, CHEN Yuan-yan, WANG Ming, NI Ying-bo
Computer Science. 2019, 46 (6A): 317-320. 
Abstract PDF(1880KB) ( 332 )   
References | RelatedCitation | Metrics
The convex localization algorithm is a range-free positioning algorithm for wireless sensor networks. To solve the low positioning accuracy caused by the large overlap area and the irregularity of the region in the traditional convex localization algorithm, an improved localization algorithm was proposed, which combines multiple communication radii with RSSI to reduce the region of the unknown node and uses angle correction on the irregular region. On the basis of the traditional convex algorithm, the improved algorithm broadcasts multiple times with multiple communication radii to refine the area where the unknown node is located, then uses RSSI to shrink the area further, and finally obtains the polygonal region, corrected by the angle, as the positioning result. The simulation results show that the improved algorithm can effectively reduce the positioning error and improve positioning accuracy compared with the original algorithm.
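The multiple-communication-radius refinement works because a node that hears some of an anchor's broadcasts but misses others is confined to an annulus rather than a full disc. A sketch of this idea (the radius values are illustrative, not the paper's parameters):

```python
def annulus(heard, radii=(0.25, 0.5, 1.0)):
    """radii: the anchor's increasing broadcast radii; heard: the set of
    radii whose broadcasts the unknown node actually received. The node
    then lies between the largest radius it missed and the smallest
    radius it heard, shrinking the candidate region."""
    missed = [r for r in radii if r not in heard]
    r_min = max(missed) if missed else 0.0   # did not hear anything closer
    r_max = min(heard)                       # did hear this one
    return r_min, r_max
```

Intersecting such annuli from several anchors gives a much smaller region than intersecting full discs, before RSSI and angle correction are applied.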
UWB Sparse Array Antenna Virtual Center Element Arrival Angle Estimation Method
YU Tao, GUO Wen-qiang, ZHU Xiao-zhang
Computer Science. 2019, 46 (6A): 321-324. 
Abstract PDF(2237KB) ( 634 )   
References | RelatedCitation | Metrics
In today's Internet of Things society, location information is one of the basic technologies that connect everything. As a high-precision positioning signal, the ultra-wideband signal has characteristics such as high time resolution and strong penetrability, is suitable for positioning people, materials and vehicles in various environments, and has broad application prospects. This paper proposed an angle-of-arrival estimation method based on an array receiving antenna: after obtaining the delay time of the transmission line between the receiving antenna elements, the timing signal determined within the receiver is solved to estimate the angle of arrival at the virtual center element of the receiving array antenna. This method can estimate the angle of arrival within a 360-degree range between the array element and the transmitting antenna, overcoming the defect that the conventional AOA method can only estimate the angle of arrival within a 180-degree range. MATLAB simulation results show that the method can effectively estimate the arrival angle of the virtual center array element within 360 degrees, and has certain feasibility and practicality.
Design of Missile Networking Based on Weights and Average Connectivity
LIU Chun-ling, SHI Yu-xin, ZHANG Ran
Computer Science. 2019, 46 (6A): 325-328. 
Abstract PDF(2554KB) ( 277 )   
References | RelatedCitation | Metrics
To realize the integration of missile detection and combat, a high-speed, reliable and low-latency communication network between nodes is required, meeting the characteristics of strong missile maneuverability, diversified launch platforms and distributed nodes. This paper designed a data-link networking scheme based on communication weights and average connectivity to improve the communication efficiency of nodes and to ensure the stability and resistance of the network by optimizing the selection of communication nodes and maximizing communication coverage. The main missile manages the communication network when it can communicate with all slave missiles. When the main missile cannot completely cover the slave missiles, the slave missiles select a center node that meets the requirements of the communication network by calculating each node's communication capacity and the average connectivity of the nodes in the network topology. The design was modeled and simulated with the TrueTime toolbox in MATLAB, and comparison with other networking algorithms verified the feasibility and reliability of the proposed scheme.
Reliability-based Scheduling for Bit-flipping Decoding Algorithm of LDPC Codes
ZHANG Xuan, LI Xiao-qiang, YAN Sha
Computer Science. 2019, 46 (6A): 329-331. 
Abstract PDF(1870KB) ( 265 )   
References | RelatedCitation | Metrics
In the iterative decoding of LDPC codes, a flooding scheduling strategy is usually adopted for message passing between variable nodes and check nodes. This paper proposed a bit-flipping decoding algorithm based on reliability scheduling. According to the soft information, the variable nodes are divided into reliable and unreliable nodes, and the messages of unreliable nodes are prevented from propagating during iterative decoding. Simulation results show that, over the additive white Gaussian noise channel, the proposed algorithm achieves better BER performance than the standard bit-flipping decoding algorithm at a lower complexity cost.
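The scheduling step of partitioning variable nodes by soft information can be sketched as follows (LLR magnitude as the reliability measure and the threshold value are assumptions; the abstract does not give the exact criterion):

```python
import numpy as np

def split_by_reliability(llr, threshold=1.0):
    """Partition variable nodes by the magnitude of their channel
    log-likelihood ratios: nodes at or above the threshold are treated
    as reliable, the rest as unreliable, whose messages would be
    withheld during the bit-flipping iterations."""
    llr = np.asarray(llr, dtype=float)
    reliable = np.abs(llr) >= threshold
    return np.flatnonzero(reliable), np.flatnonzero(~reliable)
```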
High Speed Joining Scheme Based on Channel Evaluation for IEEE 802.15.4e TSCH
XU Yong, ZHANG Xiao-rong, ZHU Yu-jun
Computer Science. 2019, 46 (6A): 332-335. 
Abstract PDF(2174KB) ( 340 )   
References | RelatedCitation | Metrics
Time Slotted Channel Hopping (TSCH) is one of the access behaviors defined in the IEEE 802.15.4e standard. In TSCH mode, new nodes that want to join the network must listen for enhanced beacons (EBs) containing network information, but the IEEE 802.15.4e standard does not specify a broadcast policy for EBs, and this policy is a key factor in how quickly new nodes join the network. Although many solutions have been proposed, they rely heavily on network density and do not consider actual interference. To this end, this paper studied the network formation process of TSCH in severe interference environments and proposed a high-speed joining scheme based on channel evaluation (HSJCE). By designing a new slot-frame structure and evaluating channel quality, the new joining and synchronization mechanism increases the number of EBs sent and selects the best-quality channel on which to send and listen for EBs. Experimental data and analysis show that the new mechanism greatly increases the probability that joining nodes successfully detect EBs and does not depend on network density; even under severe congestion, HSJCE provides shorter join times.
Research on 3D Dynamic Clustering Routing Algorithm Based on Cooperative MIMO for UWSN
LIANG Ping-yuan, LI Jie, PENG Jiao, WANG Hui
Computer Science. 2019, 46 (6A): 336-342. 
Abstract PDF(3354KB) ( 310 )   
References | RelatedCitation | Metrics
To address energy saving and energy balance in homogeneous underwater wireless sensor networks (UWSN) based on cooperative multi-input multi-output (MIMO), a multi-hop distributed three-dimensional UWSN system model was built in this paper. By introducing an energy threshold and a distance algorithm, the deficiency of the energy spatial distribution in the DCREDT selection algorithm was remedied, and an underwater dynamic clustering routing algorithm based on energy and distance with thresholds (UDCREDT) was proposed. At the same time, the influence of energy balance on the service life of the network was quantitatively analyzed and the threshold value was determined. Finally, the reasonableness and validity of the new UDCREDT algorithm were verified by simulation. Compared with the DCREDT selection algorithm, energy consumption is reduced by about 6.81% and balance is improved by about 7.98%, which effectively prolongs the service life of the network.
Information Security
NFV Based Detection Method Against Double LSAs Attack on OSPF Protocol
LI Peng-fei, CHEN Ming, DENG Li, QIAN Hong-yan
Computer Science. 2019, 46 (6A): 343-347. 
Abstract PDF(2727KB) ( 357 )   
References | RelatedCitation | Metrics
The OSPF protocol is one of the most widely used and successful interior gateway routing protocols in the Internet. Although there have been many investigations into the security of the OSPF protocol, effective detection methods against route spoofing attacks are still lacking, so it is difficult to ensure the security of OSPF routing. By studying the principle of the double link-state-advertisement (LSA) attack on the OSPF protocol, this paper presented three necessary conditions for detecting the attack and proposed a corresponding detection method. A detection middlebox and an analysis server, used to detect attacks and clean up the resulting routing pollution, were then designed and implemented based on network function virtualization (NFV) technology. The detection middlebox captures relevant OSPF packets from the links, sends trace records to the analysis server, and receives instructions from the analysis server to restore polluted routes. The analysis server invokes the detection algorithm to analyze the trace record stream; if an attack is detected, an alarm is raised and an instruction is sent to the middlebox to restore the contaminated routes. Experimental results on a prototype show that the proposed method can detect the OSPF double-LSA attack in both IP and NFV networks accurately and efficiently, and that the prototype is cost-effective and easy to deploy.
Analysis Research of Software Requirement Safety Based on Neural Network and NLP
SUN Bao-hua, HU Nan, LI Dong-yang
Computer Science. 2019, 46 (6A): 348-352. 
Abstract PDF(1716KB) ( 525 )   
References | RelatedCitation | Metrics
To identify incompleteness and ambiguity in software requirements and build a bridge between software requirements and standard specifications, this paper proposed an analysis and evaluation model based on natural language processing (NLP) and neural networks. Firstly, from ISO standards, the Open Web Application Security Project (OWASP) and the PCI catalogue, multiple security specification features are identified and textual entailment relationships are found. Then, the entailment results and text annotations are used to train the neural network model to predict whether a given statement in a document is adequate. The proposed model evaluates the performance of each entailment configuration. The results show that entailment configuration 9 has the highest average F-score, making it the best completeness predictor, and that the proposed model outperforms the null model under both optimal and worst allocation.
JPEG Image File Header Forensics
XING Wen-bo, DU Zhi-chun
Computer Science. 2019, 46 (6A): 353-357. 
Abstract PDF(4237KB) ( 703 )   
References | RelatedCitation | Metrics
Objective: extract image data from the JPEG image file header, obtain the header information, and then judge the authenticity of the image file. Method: extract the JPEG image data with the WinHex software and parse the acquired data to obtain the JPEG file header information. Result: the Exif information, GPS information, thumbnail information and quantization tables are extracted from the JPEG file header. Conclusion: by judging whether the information obtained from the JPEG file header is consistent with the header information produced by the image acquisition tool, it is possible to determine the authenticity of a JPEG image and whether it has been processed by image-editing software.
Improved Efficient Proxy Blind Signature Scheme
WANG Xing-wei, HOU Shu-hui
Computer Science. 2019, 46 (6A): 358-361. 
Abstract PDF(1558KB) ( 405 )   
References | RelatedCitation | Metrics
Analysis of an existing certificateless proxy blind signature scheme shows that its efficiency is not high. Moreover, although the scheme has been proved able to resist malicious-but-passive attacks by a bad KGC, no KGC can be fully trusted in the real world. Based on the ECDLP problem and bilinear pairings, this paper presented an improved efficient proxy blind signature scheme without a KGC, and demonstrated the correctness and security of the scheme.
Encryption of Wireless Sensor Networks Based on Chaos and WEP
LU Zheng-qiao
Computer Science. 2019, 46 (6A): 362-364. 
Abstract PDF(2030KB) ( 350 )   
References | RelatedCitation | Metrics
A wireless sensor network is an open system that is easy to monitor and disturb. The CPU of a sensor node has limited computing speed, word length and storage space, so it cannot execute encryption algorithms with PC-level running costs. Low-cost wireless encryption algorithms such as WEP and TKIP have been shown to suffer from weak key strength and related problems. This paper proposed an encryption protocol combining WEP with chaotic sequences: by using a chaotic map as the sub-key generation algorithm, key randomness is improved without increasing time or space complexity, the key-reuse problem of the WEP protocol is avoided, and deciphering becomes more difficult.
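A chaotic sub-key generator of the kind described can be sketched with the logistic map (the abstract does not name its map, so the logistic map and the parameter r = 3.99 are assumptions); each chaotic state is quantized to one keystream byte:

```python
def chaotic_keystream(seed, n, r=3.99):
    """Generate n keystream bytes from the logistic map
    x_{k+1} = r * x_k * (1 - x_k), seeded with 0 < seed < 1.
    These bytes would play the role of WEP's per-packet sub-key:
    the same seed reproduces the stream, nearby seeds diverge."""
    x = seed
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)         # chaotic iteration
        out.append(int(x * 256) % 256)  # quantize state to a byte
    return bytes(out)
```

A plaintext would then be XOR-ed with this stream, exactly as WEP XORs with its RC4 keystream.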
Attack Prediction Method Based on Multi-step Attack Scenario
HU Qian
Computer Science. 2019, 46 (6A): 365-369. 
Abstract PDF(3501KB) ( 921 )   
References | RelatedCitation | Metrics
Multi-step attack prediction complements intrusion detection and can prevent, reduce or interrupt security threats to a certain extent. To this end, this paper proposed an attack prediction method based on multi-step attack scenarios. The method uses a Bayesian network model to describe the attack scenario graph, and builds a causal Bayesian attack scenario graph by mining the causal association rules among multi-step attacks. Based on this network structure and the observed attack evidence, it calculates the probability of unknown attacks and predicts the attacker's next attack and intention. Finally, experiments verify that the proposed method can accurately predict the next attack and the attacker's intention.
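The core inference step can be sketched on a toy causal attack chain. The three-node graph (Scan → Exploit → Escalate) and its conditional probabilities below are hypothetical, chosen only to show how the probability of a future attack stage is computed from evidence by summing over hidden nodes:

```python
# Tiny causal attack scenario graph: Scan -> Exploit -> Escalate.
# All CPT numbers are illustrative, not taken from the paper.
P_exploit_given_scan = {True: 0.7, False: 0.05}    # P(Exploit | Scan)
P_escalate_given_exploit = {True: 0.8, False: 0.02}  # P(Escalate | Exploit)

def p_escalate_given_scan(scan_observed):
    """P(Escalate | Scan) by marginalizing over the hidden Exploit node."""
    total = 0.0
    for exploit in (True, False):
        p_e = (P_exploit_given_scan[scan_observed] if exploit
               else 1.0 - P_exploit_given_scan[scan_observed])
        total += p_e * P_escalate_given_exploit[exploit]
    return total
```

With a scan observed, the posterior probability of escalation is 0.7·0.8 + 0.3·0.02 = 0.566; without it, only 0.059 — which is the kind of gap the predictor exploits to rank the attacker's likely next step.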
Ownership Transfer Protocol for Multi-owners Internal Weight Changes with Trusted Third Party
GAN Yong, WANG Kai, HE Lei
Computer Science. 2019, 46 (6A): 370-374. 
Abstract PDF(1750KB) ( 320 )   
References | RelatedCitation | Metrics
In practical applications, the ownership of a multi-owner RFID tag transfers not only when the owners of the tag change but also when the proportion of weights possessed by each owner changes. Therefore, this paper put forward a tag ownership transfer protocol for multi-owner internal weight changes with a trusted third party (TTP). Because a trusted third party participates in the transfer, the original owners completely hand the tag's ownership to the new owners after the weights change, so the original owners become irrelevant to the tag. The protocol uses Lagrange interpolating polynomials and Shamir's threshold secret sharing scheme, and its security is analyzed with GNY logic. The results show that the protocol can resist various attacks during the transfer process. Meanwhile, simulation results indicate that the time consumption and the amount of computation on the tag fall within an acceptable range.
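The Shamir/Lagrange machinery the protocol relies on can be sketched in a few lines: a secret is embedded as the constant term of a random degree-(k−1) polynomial over a prime field, each owner holds one point, and any k points recover the secret by Lagrange interpolation at x = 0. The prime modulus below is illustrative; the weight-change logic of the actual protocol is not reproduced here.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, k, n):
    """Split secret into n shares, any k of which reconstruct it."""
    poly = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse via Fermat's little theorem (P is prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A weighted owner can simply hold several shares, which is one standard way a (k, n) threshold scheme encodes the unequal ownership proportions the paper deals with.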
Modeling and Stability Analysis for SIRS Model with Network Topology Changes
LIU Xiao-dong, WEI Hai-ping, CAO Yu
Computer Science. 2019, 46 (6A): 375-379. 
Abstract PDF(2006KB) ( 807 )   
References | RelatedCitation | Metrics
This paper proposed an improved model to tackle the problem that network topology changes are not considered in the classic SIRS (susceptible-infected-recovered-susceptible) model. The threshold and the correlation between topology and the transmission process are deduced by Lyapunov stability theory. In the spreading process, the computer virus ultimately disappears when the system meets the threshold condition; when it does not, a local virus equilibrium point exists, from which the limiting conditions for the stability of the equilibrium point are also derived. Simulation results indicate that the theoretical conclusions are valid and that the SIRS model with network topology changes simulates the spread of actual computer viruses better than the existing SIRS model.
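For reference, the classic mean-field SIRS dynamics that the paper extends can be simulated with a simple Euler integration; the topology-change mechanism of the improved model is not reproduced in this sketch.

```python
def simulate_sirs(beta, gamma, delta, s0, i0, r0, steps, dt=0.01):
    """Euler integration of the classic SIRS mean-field equations:
       dS/dt = -beta*S*I + delta*R
       dI/dt =  beta*S*I - gamma*I
       dR/dt =  gamma*I  - delta*R
    beta: infection rate, gamma: recovery rate, delta: immunity-loss rate."""
    S, I, R = s0, i0, r0
    for _ in range(steps):
        dS = -beta * S * I + delta * R
        dI = beta * S * I - gamma * I
        dR = gamma * I - delta * R
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return S, I, R
```

The threshold behavior referred to in the abstract corresponds to the basic reproduction number beta/gamma: below 1 the infected fraction decays to zero, above 1 it settles at an endemic equilibrium.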
Digital Image Forensics for Copy and Paste Tampering
XING Wen-bo, DU Zhi-chun
Computer Science. 2019, 46 (6A): 380-384. 
Abstract PDF(4825KB) ( 660 )   
References | RelatedCitation | Metrics
When a digital image is tampered with by copy and paste, the copied part may be zoomed or rotated and then pasted into a different part of the image. Key points are detected with the SIFT algorithm, and matched key points are found by comparing their descriptor vectors. The matched key points are classified according to the affine transformation matrix, which is estimated with RANdom SAmple Consensus (RANSAC), i.e., by repeatedly drawing three matched point pairs at random. The affine transformation of the image and its inverse are then computed, yielding the local correlation map between the tampered image and its affine-transformed image, and likewise between the tampered image and its inverse-transformed image. Each class of matched key points is divided into two groups according to the affine transformation relation. A binary image is seeded from the key points of each group and dilated with structural elements: a dilated pixel is kept when its value in the local correlation map exceeds the threshold and discarded otherwise. The binary image is dilated iteratively until it no longer expands, and its boundary is marked on the original image. Experiments show that this method can effectively locate copy-paste tampered regions in digital images.
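One RANSAC hypothesis step — solving a 2×3 affine matrix from exactly three matched point pairs — can be sketched with plain NumPy (the full pipeline with SIFT detection and correlation maps is not reproduced here):

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Solve the 2x3 affine matrix A mapping src -> dst from 3 point pairs.
    Each pair (x, y) -> (u, v) contributes two linear equations in A's
    six unknowns, so three non-collinear pairs determine A exactly."""
    M, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0]); b.append(u)
        M.append([0, 0, 0, x, y, 1]); b.append(v)
    a = np.linalg.solve(np.array(M, float), np.array(b, float))
    return a.reshape(2, 3)
```

In the RANSAC loop this solve is repeated for random triples of matches, and the hypothesis with the most inliers classifies the matched key points into source and destination groups.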
Big Data & Data Mining
Citywide Crowd Flows Prediction Based on Spatio-Temporal Recurrent Convolutional Networks
GUO Sheng-nan, LIN You-fang, JIN Wen-wei, WAN Huai-yu
Computer Science. 2019, 46 (6A): 385-391. 
Abstract PDF(4027KB) ( 766 )   
References | RelatedCitation | Metrics
Accurately forecasting crowd flows in urban areas can provide effective decision support for traffic management and citizens' travel. Crowd flows in urban regions have strong correlations in both the temporal and the spatial dimension, and these complex factors make accurate prediction challenging. A novel neural network structure named attention-based spatio-temporal recurrent convolutional networks (ASTRCNs) was proposed, which can simultaneously model the various factors that affect crowd flows. ASTRCNs consist of three components, which respectively capture the short-term dependencies, the daily periodicity and the weekly patterns of the crowd flows. Experimental results on a real crowd-flow dataset from Beijing demonstrate that the proposed ASTRCNs outperform classical time series methods and existing deep-learning-based prediction methods.
Fault Prediction of Power Metering Equipment Based on GBDT
LIU Jin-shuo, LIU Bi-wei, ZHANG Mi, LIU Qing
Computer Science. 2019, 46 (6A): 392-396. 
Abstract PDF(1785KB) ( 515 )   
References | RelatedCitation | Metrics
Fault risk prediction for power metering equipment can reduce the losses that equipment failures cause to the national grid. Firstly, data preprocessing and feature selection are carried out. Secondly, GBDT-based predictors for fault category, fault subclass and equipment life cycle are designed. Finally, the validity and novelty of the designed model are verified. The experimental data were provided by the China Electric Power Research Institute. The results show that the proposed algorithm predicts the six fault types with a precision of 90.56%, a recall of 92.95% and an F1 value of 91.71%. Compared with regression, BP neural network, AdaBoost and decision tree algorithms, the gradient boosting decision tree performs best under tuned parameters.
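A GBDT fault-type classifier of the kind described can be sketched with scikit-learn's `GradientBoostingClassifier`; the synthetic dataset below merely stands in for the proprietary metering data, and the hyperparameters are illustrative rather than the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the metering fault records (3 fault classes).
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
macro_f1 = f1_score(y_te, pred, average="macro")
```

Note that the reported F1 is consistent with the reported precision and recall: 2·90.56·92.95 / (90.56 + 92.95) ≈ 91.7.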
MetaStruct-CF:A Meta Structure Based Collaborative Filtering Algorithm in Heterogeneous Information Networks
Computer Science. 2019, 46 (6A): 397-401. 
Abstract PDF(1929KB) ( 559 )   
References | RelatedCitation | Metrics
In recent years, heterogeneous information networks (HINs) have received much attention because they contain rich semantic information. Previous works have demonstrated that the rich relationship information in HINs can effectively improve recommendation performance. As an important tool for mining relationship information in HINs, the meta-path has been widely used in many algorithms. However, because of its simple linear structure, a meta-path may not be able to express complex relationship information. To address this issue, this paper proposed a new recommendation algorithm, MetaStruct-CF, which applies meta structures to capture accurate relationship information among data objects. Different from existing methods, the proposed algorithm combines multiple relationships to effectively exploit the information in HINs. Extensive experiments on two real-world datasets show that the algorithm achieves better recommendation performance than several popular and state-of-the-art methods.
Distributed Spatial Keyword Query Processing Algorithm with Relational Attributes
XU Zhe, LIU Liang, QIN Xiao-lin, QIN Wei-meng
Computer Science. 2019, 46 (6A): 402-406. 
Abstract PDF(2242KB) ( 391 )   
References | RelatedCitation | Metrics
The rapid growth of the mobile Internet and the Internet of Things generates a large amount of spatial text-object data with relational attributes. Search engines for web page text can efficiently store and index textual data, but they only support textual keyword queries; mixed data comprising geographic location, text and relational attributes cannot be processed. Existing spatial keyword query processing techniques do not consider relational attributes as filter conditions, and they are single-machine implementations that cannot meet query performance requirements. To solve these problems, this paper proposed a baseline algorithm named BADKLRQ (Baseline Algorithm of Distributed Keywords and Location-aware with Relational Attributes Query), which maps the relational, spatial and keyword attributes into text data and indexes the converted text with a row text index. A query with relational, spatial and keyword predicates is likewise converted into several text keywords in the mapping space and evaluated against the converted text data. An improved algorithm, MGDKLRQ, refines the conversion of spatial attributes into text keywords. Experiments show that BADKLRQ improves on the existing algorithm by 10% to 15%, and MGDKLRQ by 20% to 30%, in terms of indexing time and query time.
Linear Twin Support Vector Machine Based on Data Distribution Characteristics
SONG Rui-yang, MENG Hua, LONG Zhi-guo
Computer Science. 2019, 46 (6A): 407-411. 
Abstract PDF(2885KB) ( 529 )   
References | RelatedCitation | Metrics
The twin support vector machine (TWSVM) has been successfully applied in many fields. However, the standard TWSVM model is not robust for classification problems involving distribution characteristics; in particular, when the uncertainty in the data fluctuates wildly, the standard model, which ignores distribution characteristics, no longer achieves satisfactory accuracy. Therefore, a weighted linear twin support vector machine based on data distribution characteristics, denoted TWSVM-U, was proposed in this paper. The new model further considers the influence of data distribution characteristics on the locations of the classification hyperplanes, and quantitatively constructs distance weights according to the dispersion of the data along the normal vector directions of the hyperplanes. TWSVM-U is a generalization of TWSVM: when the training samples have no distribution characteristics, it degenerates to the standard TWSVM model. Experiments with 10-fold cross validation show that TWSVM-U outperforms SVM and TWSVM on classification problems with large data fluctuation ranges.
Common Issues and Case Analysis of System Data Migration
LU Ye-shan
Computer Science. 2019, 46 (6A): 412-416. 
Abstract PDF(2615KB) ( 523 )   
References | RelatedCitation | Metrics
With the development of society and the rapid change of technical frameworks, replacing old production systems with new ones has become a trend, and such replacement inevitably involves data docking between the old and new systems. In the system construction of an organization in a city, a project needed to migrate all business data from the old system to the new one. Because the tablespaces, table structures and table fields of the two systems are inconsistent, ensuring the consistency and integrity of the data, preventing data loss during migration and keeping dirty data from entering the new system became top priorities of the project. To solve these problems, this paper designed a data migration process based on ETL tools, obtaining a complete migration pipeline by combining and chaining steps and thereby completing the data docking between the old and new systems. The paper elaborates the following migration problems and their solutions: 1) common errors in the data flow; 2) migration with inconsistent data types; 3) inconsistent field lengths in the target database; 4) adjusting the migration measures when the original data change after migration is completed. On this basis, the paper briefly analyzes and summarizes the problems encountered during data migration and the countermeasures to them.
Temporal Text Data Stream Feature Trend Model and Algorithm
MENG Zhi-qing, XU Wei-wei
Computer Science. 2019, 46 (6A): 417-422. 
Abstract PDF(1946KB) ( 818 )   
References | RelatedCitation | Metrics
On e-commerce and social networking platforms, large text data streams are generated continuously. Quickly extracting the features of a text data stream to discover trends is very important for guiding enterprise operations. For example, clothing enterprises must perceive fashion information as quickly and accurately as possible, since fashion trends are of vital importance to design, production and operation. Taking the text data stream of online goods as the research object and combining it with the real-time stream of online sales text, this paper defined a feature trend model for temporal text data streams and then proposed a real-time mining algorithm for discovering feature trends in them. The algorithm was applied to clothing sales descriptions to extract popular features; it can obtain effective fashion trends and provide decision support for enterprises when formulating production plans and selecting marketing strategies. Experiments on real sales data from an e-commerce platform show that the algorithm is accurate and fast, so it has both theoretical and practical significance.
Linear Discriminant Analysis of High-dimensional Data Using Random Matrix Theory
LIU Peng, YE Bin
Computer Science. 2019, 46 (6A): 423-426. 
Abstract PDF(1576KB) ( 672 )   
References | RelatedCitation | Metrics
Linear discriminant analysis (LDA) is an important theoretical and analytical tool for many machine learning and data mining tasks. As a parametric classification method, it performs well in many applications. However, LDA is impractical for the high-dimensional datasets that are now routinely generated in modern society. A primary reason is that the sample covariance matrix is no longer a good estimator of the population covariance matrix when the dimension of the feature vector is close to, or even larger than, the sample size. Therefore, this paper proposed a regularized high-dimensional classifier based on random matrix theory. Firstly, a consistent estimate of the high-dimensional covariance matrix is obtained through rotation-invariant estimation and eigenvalue truncation. Secondly, the estimated covariance matrix is used to evaluate the discriminant function. Numerical experiments on artificial datasets as well as real-world datasets, such as microarray datasets, demonstrate that the proposed method has wider applicability and yields higher accuracy than existing competitors.
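The failure mode and its regularized fix can be illustrated with scikit-learn: when the dimension (200) exceeds the per-class sample size (80), the sample covariance is singular, but Ledoit-Wolf shrinkage (here a stand-in for the paper's random-matrix-theory estimator, not the same method) still yields a usable discriminant.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
p, n = 200, 80                     # dimension larger than per-class sample size
X = np.vstack([rng.normal(0.00, 1.0, (n, p)),
               rng.normal(0.25, 1.0, (n, p))])   # two Gaussian classes
y = np.array([0] * n + [1] * n)

# shrinkage="auto" applies the Ledoit-Wolf regularized covariance estimate,
# which is invertible even though the plain sample covariance is singular.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
acc = lda.score(X, y)
```

Without shrinkage, the plug-in covariance estimate here has at least p − 2n + 2 zero eigenvalues, which is exactly the degeneracy the eigenvalue-truncation step in the paper addresses.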
Educational Administration Data Mining of Association Rules Based on Domain Association Redundancy
LU Xin-yun, WANG Xing-fen
Computer Science. 2019, 46 (6A): 427-430. 
Abstract PDF(2383KB) ( 296 )   
References | RelatedCitation | Metrics
Because of the periodicity of teaching and changes in the teaching environment, educational administration data in colleges and universities have time series characteristics and contain much association redundancy, so it is difficult to discover efficient and interesting association rules. Although sequential pattern mining algorithms can mine time series frequent itemsets, they cannot eliminate the association redundancy in educational administration data, and the utility and novelty of the mining results do not meet requirements. Therefore, this paper proposed FUI_DK, an association rule mining algorithm that accounts for domain association redundancy. FUI_DK generates frequent candidate itemsets with a sequential pattern mining algorithm, adds utility and interestingness measures on top of the support and confidence of classical association rule algorithms to obtain high-utility interesting itemsets, and sorts the qualifying association rules by support, confidence and utility, finally yielding association rules with high utility and high interestingness. Comparative experiments and analysis of the mining results were carried out on the educational administration data of a university. The results show that FUI_DK has better time performance in mining university educational administration data, and its elimination rate of association rules already known in the field reaches 43%, which helps colleges and universities carry out time-saving and effective educational data mining.
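The classical support and confidence measures that FUI_DK builds on are easy to state concretely. A minimal sketch over toy course-enrollment transactions (the utility and interestingness extensions of the paper are not reproduced):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent => consequent.
    support    = P(antecedent AND consequent)
    confidence = P(consequent | antecedent)"""
    a, c = set(antecedent), set(consequent)
    n = len(transactions)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_ac = sum(1 for t in transactions if (a | c) <= set(t))
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

For example, over four hypothetical enrollment records, the rule "calculus ⇒ physics" below holds in 2 of 4 transactions (support 0.5) and in 2 of the 3 containing calculus (confidence 2/3).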
Research on Population Prediction Based on Grey Prediction and Radial Basis Function Network
XU Li-li, LI Hong, LI Jin
Computer Science. 2019, 46 (6A): 431-435. 
Abstract PDF(1624KB) ( 569 )   
References | RelatedCitation | Metrics
Accurate population prediction is extremely important for economic growth and social stability. Therefore, this paper used the historical total population of Shandong Province to construct a grey prediction model and a radial basis function network model, each simulating the total population of the 20 years from 1995 to 2014. To overcome the limitations of the single models, the standard deviation method was used to redistribute the weights of their forecasts, and a combination model was built on that basis. The results show that the combined forecasting model is more accurate than either the grey model or the radial basis function network model, and the combined model was used to make a short-term forecast of the total population from 2015 to 2025.
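The grey half of such a combination is usually the GM(1,1) model, which fits a first-order differential equation to the accumulated series. A compact NumPy sketch (standard textbook form; the paper's exact variant may differ):

```python
import numpy as np

def gm11(x, horizon):
    """Fit a GM(1,1) grey model to series x and forecast `horizon` extra steps.
    The model fits dx1/dt + a*x1 = b, where x1 is the accumulated series."""
    x = np.asarray(x, float)
    x1 = np.cumsum(x)
    z = 0.5 * (x1[1:] + x1[:-1])           # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=0.0)   # de-accumulate; x_hat[0] == x[0]
    return x_hat
```

Because population series are close to smooth exponentials, GM(1,1) fits them well from very few observations, which is precisely why it is paired with an RBF network that captures the residual nonlinearity.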
Vertical Analysis Based on Fault Data of Running Smart Meter
LIU Zi-yi, LIU Qing, WANG Chong, WANG Ji-meng, WANG Yue, LIU Jin-shuo, YIN Ze-hao
Computer Science. 2019, 46 (6A): 436-438. 
Abstract PDF(1532KB) ( 332 )   
References | RelatedCitation | Metrics
As the main tool for electricity measurement and economic settlement, the failure rate of smart meters directly affects the national economy and people's livelihood. This paper devised a vertical analysis model for the fault data of running smart meters. The model can analyze the operational failure rate data of smart meters from different manufacturers and batches. It first cleans the useless data, then performs linear regression analysis on the basic data items to obtain the failure rate and its rate of change for each batch, and finally clusters these values to evaluate the stability of factory quality. The method and its results can assess the quality of a batch of smart meters and help estimate the quality of the factory.
Research on Naive Bayes Ensemble Method Based on Kmeans++ Clustering
ZHONG Xi, SUN Xiang-e
Computer Science. 2019, 46 (6A): 439-441. 
Abstract PDF(1645KB) ( 434 )   
References | RelatedCitation | Metrics
Naive Bayes is widely applied because of its simplicity, high computational efficiency, high accuracy and solid theoretical foundation. Since diversity is a key condition for ensemble learning, this paper studied how to improve the ensemble diversity of naive Bayes classifiers with kmeans++ clustering, so as to improve their generalization performance. Firstly, multiple naive Bayes classifier models are trained on a training sample set. To increase the diversity among the base classifiers, the kmeans++ algorithm is used to cluster the base classifiers' predictions on the validation set. Finally, the base classifier with the best generalization performance is selected from each cluster for ensemble learning, and the final result is obtained by simple voting. Experiments on UCI standard datasets verify the algorithm and show that its generalization performance is greatly improved.
Persona Based Social User Modeling Using KD-Tree
WAN Jia-shan, CHEN Lei, WU Jin-hua, GAO Chao
Computer Science. 2019, 46 (6A): 442-445. 
Abstract PDF(1929KB) ( 1458 )   
References | RelatedCitation | Metrics
Traditional information push services take little account of the specific needs of social network users in particular situations, so their recommendations are poorly targeted and system conversion rates are low. In response to these problems, this paper proposed an intelligent push method based on user personas. By analyzing user data from intelligent learning platforms, a KNN clustering procedure implemented with a KD-tree is used to analyze user preferences and behavior characteristics and to classify users into categories. First, through cluster center analysis, each type of user is abstracted into a highly refined short text that forms a representative label. Second, based on the label weights of individual users and their different service demands, user personas are modeled a second time for refinement. Finally, recommendations are made with a collaborative filtering algorithm. User personas enhance the usability and value of user data; they can also free analysts from large volumes of data and support fine-grained classification and thus more accurate recommendations.
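The KD-tree neighbor lookup at the heart of the method can be sketched with SciPy's `cKDTree`; the three-dimensional user-feature vectors below are invented for illustration (e.g. normalized activity shares on a learning platform).

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy user-feature vectors; columns might be (video, quiz, forum) activity.
users = np.array([
    [0.9, 0.1, 0.0],   # heavy video watcher
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.2],   # quiz taker
    [0.0, 0.8, 0.3],
    [0.1, 0.1, 0.9],   # forum poster
])
tree = cKDTree(users)             # builds the KD-tree once, O(n log n)

query = np.array([0.88, 0.12, 0.02])     # a new user's feature vector
dists, idx = tree.query(query, k=2)      # two nearest neighbours, O(log n) avg.
```

The returned neighbor indices feed the persona labeling step: the new user inherits the representative label of the cluster its nearest neighbors belong to.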
Research on Sales Forecast of Prophet-LSTM Combination Model
GE Na, SUN Lian-ying, SHI Xiao-da, ZHAO Ping
Computer Science. 2019, 46 (6A): 446-451. 
Abstract PDF(2982KB) ( 1276 )   
References | RelatedCitation | Metrics
Predicting short-term or long-term changes in the sales volume of a product is an important reference for enterprises formulating marketing strategies and optimizing their industrial layout. After deeply analyzing the characteristics of the Prophet additive model and the LSTM neural network, this paper built a Prophet-LSTM combination model for sales forecasting based on the time series of a company's product sales. Comparison experiments were designed and implemented against the standalone Prophet and LSTM models and two typical time series prediction models. The results show that the Prophet-LSTM combination model has stronger applicability and higher accuracy for sales time series analysis, providing an important scientific basis for the company to respond to changes in market demand.
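One common way to combine two forecasters (a simple stand-in for the paper's combination scheme, whose exact weighting is not given in the abstract) is to weight each inversely to its in-sample error:

```python
def combine_forecasts(actual, f1, f2):
    """Blend two forecast series with weights inversely proportional to
    their in-sample mean absolute error. Assumes neither MAE is zero."""
    mae = lambda f: sum(abs(a - p) for a, p in zip(actual, f)) / len(actual)
    e1, e2 = mae(f1), mae(f2)
    w1 = (1.0 / e1) / (1.0 / e1 + 1.0 / e2)
    w2 = 1.0 - w1
    blended = [w1 * a + w2 * b for a, b in zip(f1, f2)]
    return blended, (w1, w2)
```

By the triangle inequality, the pointwise error of the blend never exceeds the larger of the two individual errors, so the combination is at worst as bad as the weaker model and often better when the models' errors are uncorrelated.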
Clustering Method Based on Hypergraph Markov Relaxation
GUO Peng, LI Ren-fa, HU Hui
Computer Science. 2019, 46 (6A): 452-456. 
Abstract PDF(2439KB) ( 345 )   
References | RelatedCitation | Metrics
Embedding high-dimensional spatio-temporal features into a low-dimensional semantic bag of words is a typical clustering problem in the Internet of Vehicles. Spectral clustering has recently attracted attention because of its simple computation and globally optimal solution, but research on determining the number of clusters is relatively scarce. The traditional eigengap heuristic works well when the clusters in the data are well pronounced; the noisier or more overlapping the clusters are, the less effective it becomes. This paper proposed a clustering method based on hypergraph Markov relaxation (HS-MR). Its basic idea is to describe the hypergraph formally with a Markov process and start a random walk on it. During the relaxation of the hypergraph Markov chain, the meaningful geometric distribution of the dataset is found through the t-th power of the random transition matrix P and diffusion mapping. An objective function based on mutual information is then proposed to automatically converge on the number of clusters. Experimental results show that the algorithm outperforms both simple graph spectral clustering and hypergraph spectral clustering in accuracy.
Density Peak Clustering Algorithm Based on Grid Data Center
LI Xiao-guang, SHAO Chao
Computer Science. 2019, 46 (6A): 457-460. 
Abstract PDF(3054KB) ( 381 )   
References | RelatedCitation | Metrics
A density peak clustering algorithm based on grid data centers was proposed. The computational complexity of clustering is reduced by meshing the dataset. Firstly, the data space is divided into grids of equal size; the density value of each grid is the number of data objects it contains plus a decayed count of the data objects in its adjacent grids, and the distance value of each grid is defined as the nearest distance from its data center to the data center of any grid with higher density. Then, the cluster center grids are identified, since they always have both a high density value and a large distance value. Finally, a density-based division approach completes the clustering. Simulation experiments on UCI and artificial datasets show that the algorithm can effectively cluster large-scale data with high accuracy in a short time.
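The grid density and grid distance quantities described above can be sketched directly in 2D; the neighbor decay factor and cell size below are illustrative parameters, and the final density-based division step is omitted.

```python
from collections import defaultdict
from math import hypot

def grid_density_peaks(points, cell=1.0, decay=0.5):
    """Per-grid density = own point count + decayed counts of the 8
    neighbouring grids; per-grid distance = nearest data-center distance
    to any denser grid (inf for the globally densest grid)."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    centers = {g: (sum(p[0] for p in pts) / len(pts),
                   sum(p[1] for p in pts) / len(pts))
               for g, pts in cells.items()}
    density = {}
    for (gx, gy), pts in cells.items():
        d = len(pts)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    d += decay * len(cells.get((gx + dx, gy + dy), []))
        density[(gx, gy)] = d
    dist = {}
    for g in cells:
        denser = [h for h in cells if density[h] > density[g]]
        cx, cy = centers[g]
        dist[g] = (min(hypot(cx - centers[h][0], cy - centers[h][1])
                       for h in denser) if denser else float("inf"))
    return density, dist
```

Grids combining high density with large distance are the cluster-center candidates, exactly as in point-wise density peak clustering, but the cost now scales with the number of occupied grids rather than the number of points.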
Personalized Learning Resource Recommendation Method Based on Three-dimensional Feature Cooperative Domination
LI Hao-jun, ZHANG Zheng, ZHANG Peng-wei
Computer Science. 2019, 46 (6A): 461-467. 
Abstract PDF(2671KB) ( 296 )   
References | RelatedCitation | Metrics
Personalized recommendation is becoming an important form of information service, and it is an effective way to alleviate knowledge disorientation and improve learning efficiency. To meet learners' personalized needs for online learning resources, personalized recommendation technology is increasingly important. Therefore, this paper proposed a personalized learning resource recommendation method based on three-dimensional feature cooperative domination (TPLRM). Firstly, a recommendation model based on three-dimensional feature cooperative domination is constructed, the resource recommendation feature parameters are improved, and a fitness function is built. Secondly, a binary particle swarm optimization algorithm with fuzzy control based on a Gaussian membership function (FCBPSO) is used to solve the model. Finally, an evaluation index system is established. Five groups of comparative experiments verify that the TPLRM method has better recommendation performance.
Hybrid Recommendation Algorithm Based on SVD Filling
LIU Qing-qing, LUO Yong-long, WANG Yi-fei, ZHENG Xiao-yao, CHEN Wen
Computer Science. 2019, 46 (6A): 468-472. 
Abstract PDF(1811KB) ( 348 )   
References | RelatedCitation | Metrics
With the development of Internet technology, information overload is becoming increasingly serious, and recommendation systems are an effective means of alleviating it. Focusing on the low recommendation efficiency caused by sparse data and cold start in collaborative filtering, this paper proposed a hybrid recommendation algorithm based on SVD filling. Firstly, singular value decomposition is used to factorize the user-item rating matrix, and the sparse matrix is filled by stochastic gradient descent. Secondly, time weights are added to optimize the user similarity in the user matrix, and Jaccard coefficients are added to optimize the item similarity in the item matrix. Then, item-based and user-based collaborative filtering are combined to compute prediction scores and select the optimal items. Finally, the proposed algorithm is compared with existing algorithms on the MovieLens and Jester datasets, and the experimental results verify its effectiveness.
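The SVD-filling step — factorizing the observed ratings and using the factors to fill the missing cells — can be sketched with a plain SGD matrix factorization; the learning rate, regularization and rank below are illustrative, and the time-weight/Jaccard refinements of the full algorithm are not reproduced.

```python
import numpy as np

def svd_fill(R, k=2, lr=0.05, reg=0.01, epochs=2000, seed=0):
    """Fill missing entries (np.nan) of rating matrix R by factorizing
    R ~= U @ V.T with stochastic gradient descent on the observed cells."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    obs = [(i, j, R[i, j]) for i in range(n) for j in range(m)
           if not np.isnan(R[i, j])]
    for _ in range(epochs):
        for i, j, r in obs:
            e = r - U[i] @ V[j]                     # prediction error
            # simultaneous update (right-hand sides use the old values)
            U[i], V[j] = (U[i] + lr * (e * V[j] - reg * U[i]),
                          V[j] + lr * (e * U[i] - reg * V[j]))
    return U @ V.T
```

The dense matrix returned by `svd_fill` is what the subsequent user-based and item-based similarity computations operate on, which is how the method sidesteps the sparsity problem.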
Online Learning Nonnegative Matrix Factorization
HE Xiao-wen, HU Yi-fei, WANG Hai-ping, CHEN Mo
Computer Science. 2019, 46 (6A): 473-477. 
Abstract PDF(2489KB) ( 411 )   
References | RelatedCitation | Metrics
This paper proposed a new online form of nonnegative matrix factorization, namely online learning nonnegative matrix factorization (OLNMF). The OLNMF algorithm uses an incremental form of the non-smooth model and adopts an "amnesic average" to control the weights of new and old samples, improving computational efficiency and reducing computational complexity. OLNMF can handle large datasets that are updated in real time and extracts a sparser basis matrix. Compared with INMF, ONMFO and Lp-INMF, experiments on face databases show that the proposed method achieves better sparsity, and an SVM classifier based on OLNMF achieves better classification accuracy on an EEG database.
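For context, the batch NMF that OLNMF makes incremental is usually solved with Lee-Seung multiplicative updates, which keep both factors nonnegative by construction; the online/amnesic-average variant of the paper is not reproduced in this sketch.

```python
import numpy as np

def nmf(X, k=2, iters=500, seed=0):
    """Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0.
    Multiplying by ratios of nonnegative quantities preserves nonnegativity
    at every step."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                       # avoids division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

An online variant replaces the full-matrix products with running sums over the samples seen so far, weighted so that old samples are gradually forgotten, which is the role of the amnesic average in the abstract.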
Method of Short Text Classification Based on Frequent Item Feature Extension
JIN Yi-fan, FU Ying-xun, MA Li
Computer Science. 2019, 46 (6A): 478-481. 
Abstract PDF(1661KB) ( 459 )   
References | RelatedCitation | Metrics
Short text features are high-dimensional and sparse, so traditional classification methods perform poorly on short text. To solve this problem, a short text classification method based on frequent item feature extension, STCFIFE, was proposed. First, frequent itemsets are mined from the background corpus with the FP-growth algorithm, and the extended feature weights are computed in combination with contextual association features. The new features are then added to the feature space of the original short text, and an SVM (support vector machine) classifier is trained on this basis. The experimental results show that, compared with the traditional SVM algorithm and the LDA+KNN algorithm, STCFIFE effectively alleviates the feature deficiency and high-dimensional sparsity problems of short text, improving the F1 value by 2% to 10% and thus the classification effect.
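The extension mechanism can be illustrated with a deliberately simplified miner: frequent co-occurring word pairs from a background corpus (a stand-in for full FP-growth itemsets) are used to add partner words to a sparse short text before vectorization.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(corpus, min_support=2):
    """Mine word pairs that co-occur in at least min_support documents.
    (A simplified stand-in for FP-growth, which handles itemsets of any size.)"""
    counts = Counter()
    for doc in corpus:
        for pair in combinations(sorted(set(doc)), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

def extend(short_text, pairs):
    """Add the partner word of every frequent pair whose other word already
    occurs in the short text, densifying its feature vector."""
    words = set(short_text)
    extra = ({b for a, b in pairs if a in words} |
             {a for a, b in pairs if b in words})
    return words | extra
```

After extension, a one-word query such as ["gpu"] gains the frequently co-occurring term "cpu", so the downstream SVM sees a less sparse, more discriminative feature vector.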
Boundary Distance Algorithm for Determining Sliding Window Size
PENG Cheng, HE Jing, CHI Hao
Computer Science. 2019, 46 (6A): 482-487. 
Abstract PDF(3300KB) ( 426 )   
References | RelatedCitation | Metrics
Because the raw measurement data collected by most equipment are voluminous and dense, existing time series sliding-window dimension reduction methods choose the window size empirically, which cannot retain the important information points of the data to the greatest extent and incurs high computational complexity. To this end, the influence of the sliding window on time series similarity techniques in practical applications was discussed, and an algorithm for determining the initial size of the sliding window was proposed. Upper and lower boundary curves with a better fit are constructed, and a trend weighting is introduced into the LB_Hust distance calculation, which reduces the difficulty of mathematical modeling and improves the efficiency of similarity classification and state evaluation for equipment data.
Matrix Factorization Recommendation Algorithm Based on Adaptive Weighted Samples
SHI Xiao-ling, CHEN Zhi, YANG Li-gong, SHEN Wei
Computer Science. 2019, 46 (6A): 488-492. 
Abstract PDF(2251KB) ( 394 )   
References | RelatedCitation | Metrics
Missing-value estimation for sparse matrices is fundamental research that is particularly important in practical applications such as recommender systems. Among the many methods for this problem, one of the most effective is Matrix Factorization (MF). However, the traditional MF algorithm has a limitation: it simply fits the observed elements of the sparse matrix by regression and does not take into account that individual samples differ in regression difficulty and should be treated differently. To address this limitation, this paper proposed a matrix factorization recommendation algorithm based on adaptive weighted samples (AWS-MF). Building on the traditional MF algorithm, the proposed method exploits the differences among training samples and assigns each sample a biased weight. To improve the performance and robustness of the model, the intermediate results are combined in the final stage to obtain the final predictions. To verify the superiority of the proposed method, comprehensive experiments were conducted on real-world data sets. The results demonstrate that AWS-MF adaptively re-weights samples according to the differences among them, and that treating samples individually leads to promising performance in real-world applications compared with the baseline methods.
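A minimal sketch of per-sample weighting inside SGD matrix factorization is shown below. The specific weighting rule `w = 1/(1+|e|)` (down-weighting hard samples) is one illustrative choice, not necessarily the paper's scheme, and the hyperparameters are arbitrary.

```python
import numpy as np

def aws_mf(R, observed, k=2, lr=0.02, reg=0.02, epochs=300, seed=0):
    """SGD matrix factorization in which each observed entry carries an
    adaptive weight derived from its current residual (illustrative)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    for _ in range(epochs):
        for u, i in observed:
            e = R[u, i] - P[u] @ Q[i]
            w = 1.0 / (1.0 + abs(e))          # adaptive sample weight
            P[u] += lr * (w * e * Q[i] - reg * P[u])
            Q[i] += lr * (w * e * P[u] - reg * Q[i])
    return P, Q

def rmse(R, observed, P, Q):
    errs = [(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in observed]
    return float(np.sqrt(np.mean(errs)))
```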
Research on Recommendation Application Based on Seq2seq Model
CHEN Jun-hang, XU Xiao-ping, YANG Heng-hong
Computer Science. 2019, 46 (6A): 493-496. 
Abstract PDF(2310KB) ( 592 )   
References | RelatedCitation | Metrics
We are surrounded by enormous amounts of information every day, which calls for recommender systems to filter out the pure gold. Traditional recommender systems have been regarded as static and pay little attention to the long- and short-term dependencies in the data. Considering the outstanding performance of recurrent neural networks on sequence data, a recommender system based on the seq2seq model was built. The recommendation process can be viewed as sequence translation or answer generation: the model uses historical user-item interaction sequences to learn the inherent frequent patterns and then predicts users' future actions on items. Two data sets commonly used for recommender system evaluation were involved in the experiments, with performance measured by BLEU. The results show that the method can make sequence recommendations. The model needs only the interaction data between users and items and dispenses with the rating matrix, thus avoiding the sparsity problem.
Bus Short-term Dynamic Dispatch Algorithm Based on Real-time GPS
ZHANG Shu-yu, DONG Da, XIE Bing, LIU Kai-gui
Computer Science. 2019, 46 (6A): 497-501. 
Abstract PDF(2486KB) ( 778 )   
References | RelatedCitation | Metrics
This paper analyzed the limitations of traditional static bus dispatching. Using the real-time GPS data of buses in service and analyzing the bus operation mechanism under heavy traffic jams and sudden increases in passenger flow, this paper proposed a new short-term dynamic bus dispatching algorithm based on neural networks. Simulations on bus lines in Guiyang show that the proposed algorithm can effectively overcome the deficiencies of traditional static dispatching and reduce the interference of human factors in manual scheduling, realizing automated and intelligent bus dispatching.
User Interest Recommendation Model Based on Context Awareness
LI Jian-jun, HOU Yue, YANG Yu
Computer Science. 2019, 46 (6A): 502-506. 
Abstract PDF(2142KB) ( 910 )   
References | RelatedCitation | Metrics
With the development and popularization of e-commerce and the Internet, user-oriented personalized recommendation is receiving more and more attention. Traditional user interest models consider only the user's own behavior toward items, while ignoring the user's context at the time. Aiming at this problem, this paper proposed a user interest model based on context awareness. It combines the user's browsing behavior with contextual factors, deeply mines the user's interest in items from both aspects, and clarifies the user's perception of items, so that users can be clustered accurately and recommendations can be made to target users based on the clustering results. Experimental results show that the recommendation accuracy of this model is higher than that of other traditional recommendation algorithms. It can better mine users' interests, adapt to changes in those interests, alleviate users' inability to choose among vast amounts of information, and improve user satisfaction. It is therefore worthwhile to mine users' hidden information from multiple perspectives so as to provide better personalized recommendations.
Decision Making of Course Selection Oriented by Knowledge Recommendation Service
ZHANG Wei-guo
Computer Science. 2019, 46 (6A): 507-510. 
Abstract PDF(1813KB) ( 310 )   
References | RelatedCitation | Metrics
Facing the rapid development of the Internet and the massive information resources on the Web, it is urgent to enable users to quickly find the information they want; hence course selection oriented to knowledge recommendation services has emerged, and it is a core issue in personalized recommendation research. Based on the theory of the Apriori algorithm for association rules, this method uses the traditional collaborative filtering recommendation algorithm to improve the Apriori algorithm. Combined with students' majors, hobbies and academic records, it constructs a course recommendation system model and analyzes a personalized recommendation algorithm based on this model. Through data mining in the students' academic record database, it guides students to choose more suitable courses and helps them learn efficiently and develop their personal strengths.
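A compact sketch of the association-rule idea applied to course records follows: mine "students who took A usually also take B" rules from transcripts by support and confidence. Naive pair counting replaces the full Apriori candidate generation, and the thresholds are arbitrary.

```python
from collections import Counter
from itertools import combinations

def course_rules(transcripts, min_support=2, min_conf=0.6):
    """Mine 'took A -> likely takes B' rules from course-selection records.
    Returns (antecedent, consequent, confidence) triples."""
    item_cnt = Counter()
    pair_cnt = Counter()
    for courses in transcripts:
        s = set(courses)
        item_cnt.update(s)
        pair_cnt.update(combinations(sorted(s), 2))
    rules = []
    for (a, b), c in pair_cnt.items():
        if c < min_support:
            continue                       # pair not frequent enough
        if c / item_cnt[a] >= min_conf:
            rules.append((a, b, c / item_cnt[a]))
        if c / item_cnt[b] >= min_conf:
            rules.append((b, a, c / item_cnt[b]))
    return rules
```

A real system would additionally filter the rule consequents by the student's major and academic record, as the abstract describes.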
Interdiscipline & Application
Remaining Useful Life Estimation Model for Software-Hardware Deteriorating Systems with Software Operational Conditions
HAN Jia-jia, ZHANG De-ping
Computer Science. 2019, 46 (6A): 511-517. 
Abstract PDF(2986KB) ( 297 )   
References | RelatedCitation | Metrics
For the problem of estimating the remaining useful life (RUL) of a software-hardware system, traditional research methods consider software reliability or hardware reliability separately and ignore the interaction between them. This paper proposed a new method that treats the use or operation of software as an external impact on the system, based on the hardware performance degradation process. The method uses hardware performance degradation indicators to characterize the impact of software operations on the system, and discrete-time hidden Markov processes are used to describe the relationship between them. Specifically, signal processing and feature extraction techniques are applied to the signal data to obtain performance degradation indicators, and hidden Markov models are used to construct the correspondence between hidden states and actual degradation. According to the number of inflection points in the system performance degradation indicators under different software operating conditions, different degradation models are built on the same hardware degradation process, so that the model describes the degradation process more accurately. Stochastic simulation and optimization techniques are used to estimate the RUL of the hardware, and the RUL of the software-hardware system is then estimated according to the system architecture. Using the performance monitoring data of a certain weapon equipment system, this paper compared the proposed algorithm with a traditional system-level RUL estimation model (a BP neural network) and showed that the proposed algorithm has higher estimation accuracy.
Many-core Optimization for Sparse Triangular Solver Under Unstructured Grids
NI Hong, LIU Xin
Computer Science. 2019, 46 (6A): 518-522. 
Abstract PDF(2821KB) ( 544 )   
References | RelatedCitation | Metrics
The Sparse Triangular Solver (SpTRSV), an important kernel in basic linear algebra libraries, is widely used in large-scale scientific computing. On unstructured grids, because of disordered data storage, deep data dependencies and frequent discrete memory accesses, the algorithm is difficult to parallelize effectively on many-core architectures. Based on the architecture of the domestic heterogeneous many-core processor SW26010, this paper proposed a general kernel optimization method for unstructured grid computing based on pipelined serialization and local parallelism. The method effectively reduces random memory accesses in unstructured grid computing, improves computing efficiency, and has good scalability. Based on this algorithm, multiple kernel optimizations were carried out for several practical applications. The experimental results show that the method achieves more than 3x speedup on a single core group and significantly reduces the running time.
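The dependency problem the abstract describes is commonly handled by level-set scheduling: rows of the triangular system are grouped into levels such that all rows in one level depend only on earlier levels and can therefore be solved in parallel. The sketch below computes those levels from the sparsity pattern; it illustrates the general technique, not the SW26010-specific pipelined scheme of the paper.

```python
def level_sets(deps):
    """deps[i]: off-diagonal column indices j < i with a nonzero in row i
    of a lower-triangular matrix. Returns rows grouped by level; rows in
    the same level have no dependencies on each other."""
    level = []
    for i in range(len(deps)):
        # a row's level is one more than the deepest row it depends on
        level.append(1 + max((level[j] for j in deps[i]), default=-1))
    groups = {}
    for i, lv in enumerate(level):
        groups.setdefault(lv, []).append(i)
    return [groups[lv] for lv in sorted(groups)]
```

On a many-core chip, each level would be dispatched to the compute cores as one parallel batch, with a barrier between levels.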
Simulation Modeling of Complex Engineering Project Schedule Risk Assessment Based on Multi-Agent
YAN Gong-da, DONG Peng, WEN Hao-lin
Computer Science. 2019, 46 (6A): 523-526. 
Abstract PDF(2433KB) ( 297 )   
References | RelatedCitation | Metrics
In order to address the difficulty of schedule risk assessment for complex engineering projects, which have complex structures, long cycles and numerous risk factors, a project schedule risk assessment model based on multiple agents was established in the AnyLogic software. In this model, a "process" agent, a "process flow" agent, a "risk factor" agent and a "control" agent were designed, taking into account the process states, transition conditions and internal processing behavior of each state. Multiple nested relations between the "process flow" and the "process" are formed by the "risk factors". A sensitivity analysis of the risk factors in a maintenance project for a certain type of marine diesel engine was completed through simulation experiments, identifying the important processes whose risk factors should be controlled with priority. The experimental results show that the model has reference value for the schedule risk assessment of complex engineering projects.
Visualization of Solid Waste Incineration Exhaust Emissions Based on Gaussian Diffusion Model
ZHENG Hong-bo, WU Bin, XU Fei, ZHANG Mei-yu, QIN Xu-jia
Computer Science. 2019, 46 (6A): 527-531. 
Abstract PDF(2805KB) ( 954 )   
References | RelatedCitation | Metrics
In order to predict the amount of waste gas emitted from solid waste incineration plants and understand the characteristics of its pollution indicators, a visualization system for the waste gas emitted by solid waste incineration plants was designed and implemented. Based on an analysis of the main factors affecting waste gas diffusion, a diffusion model for the waste gas of solid waste incineration plants based on Gaussian point-source diffusion was established, and a method for drawing concentration contour maps of the waste gas was realized on top of the model. The distribution map of solid waste incineration plants in China and the contour maps of their emission concentrations were plotted on Baidu Maps. Through the combination of dynamic histograms, a time wheel and bar charts, the real-time emissions of solid waste incineration plants are displayed, realizing the visualization of their exhaust emission data. The visualization system displays the waste gas data graphically, so as to achieve real-time monitoring of exhaust pollutants.
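The standard Gaussian point-source (plume) formula with ground reflection, which such a diffusion model is typically built on, can be sketched as below. The linear growth of the dispersion coefficients with downwind distance (`sigma_y = a*x`, `sigma_z = b*x`) and the coefficient values are simplifying assumptions; real models look them up from atmospheric stability classes.

```python
import math

def plume_concentration(Q, u, x, y, z, H, a=0.22, b=0.20):
    """Concentration of a continuous point source at downwind distance x,
    crosswind offset y, height z, for source strength Q, wind speed u and
    effective stack height H. Includes the ground-reflection term."""
    sy, sz = a * x, b * x                     # dispersion coefficients
    term_y = math.exp(-y * y / (2 * sy * sy))
    term_z = (math.exp(-(z - H) ** 2 / (2 * sz * sz))
              + math.exp(-(z + H) ** 2 / (2 * sz * sz)))  # image source
    return Q / (2 * math.pi * u * sy * sz) * term_y * term_z
```

Evaluating this on a grid around the plant yields the field from which the concentration contour lines are drawn.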
Cloud Computing Based Geographical Information Service Technologies
ZHANG Xin, HU Xiao-dong, WEI Jia-wei
Computer Science. 2019, 46 (6A): 532-536. 
Abstract PDF(3625KB) ( 1272 )   
References | RelatedCitation | Metrics
Existing research on geographic information service technology is limited to redeploying existing GIS software in the cloud computing environment and to single-domain application modes. Addressing these limitations, this paper proposed that research on geographic information services needs to study application modes in the cloud computing environment in depth and to deepen the fused application of geospatial-temporal big data characterized by Earth observation information. Furthermore, it analyzed the technical characteristics of cloud GIS service platforms in five aspects: data management, geographic computing, geographic information mapping and terminal services, thematic application system construction, and network application and service modes. By reviewing, classifying and dialectically analyzing the classic theoretical and empirical literature in this field at home and abroad, and in line with the latest trends in cloud GIS research, a technical framework for a cloud GIS platform integrating "storage-computing-service" was designed, and five modes of geographic information service based on cloud computing were proposed according to the current application status of cloud GIS platforms. MongoDB was adopted as the carrier of information and business data, with the GridFS file system as the underlying heterogeneous storage. Redis was utilized as the data-exchange cache of the data engine to ensure processing efficiency, and ZeroMQ was leveraged as the transmission middleware to develop the data engine and resource services in a "manager-worker" mode based on Node.js. Aiming at the storage and management of massive, high-throughput, spatially structured remote sensing image data and basic land information products, a prototype system was developed based on the MongoDB database and tested with PB-scale data, which verified the feasibility and advancement of the research results in this paper.
Vibration Sensor Data Analysis Based on Wavelet Denoising
ZHANG Yang-feng, WEI Shi-hong, DENG Na-na, WANG Wen-rui
Computer Science. 2019, 46 (6A): 537-539. 
Abstract PDF(2270KB) ( 593 )   
References | RelatedCitation | Metrics
Aiming at the problems of signal filtering and of preserving and extracting fault-signal data in the vibration data of mining machinery and equipment, this paper proposed a wavelet transform method whose threshold is optimized by a neural network. A MEMS triaxial accelerometer samples the digital data, which are converted into displacement and then decomposed by wavelets. The high-frequency coefficients are optimized and adjusted by the neural-network threshold, and the data are reconstructed to achieve noise reduction. Finally, a Fourier transform of the filtered signal is carried out, and the ratio of high-frequency coefficients is calculated from the amplitude-frequency energy. Experimental results show that the wavelet transform method with a neural-network-adjusted threshold can adjust the threshold automatically after adaptive learning and has an ideal filtering effect on vibration sensor signals. It filters more than 15% more high-frequency noise energy than the traditional threshold while retaining abrupt fault information, which provides an important basis for later fault diagnosis.
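The core decompose-threshold-reconstruct step can be sketched with a one-level Haar transform and soft thresholding; a fixed threshold stands in here for the paper's neural-network-adjusted one, and a real implementation would use a deeper decomposition with a suitable wavelet (e.g. via PyWavelets).

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar decomposition of an even-length signal, soft
    thresholding of the detail (high-frequency) coefficients, then
    reconstruction."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)            # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```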
Construction of Military Corpus for Entity Annotation
ZHOU Bin-bin, ZHANG Hong-jun, ZHANG Rui, FENG Yun-tian, XU You-wei
Computer Science. 2019, 46 (6A): 540-546. 
Abstract PDF(1733KB) ( 927 )   
References | RelatedCitation | Metrics
The key to building a military corpus is the identification and annotation of military entities. For the entities of a military corpus, this paper put forward a unified part-of-speech tagging specification for military language and a set of military corpus annotation specifications, and designed a framework for automatically extending military corpora based on entity feature extraction with a military language dictionary. With the help of a high-precision classifier, the framework selects and extracts basic features and, combined with the typical features of the language set, builds the feature space. Entity recognition of the military corpus is corrected against the language dictionary, and the corpus entities are annotated with morphological markers according to the specified annotation standard and specification, so that the framework builds a large-scale, high-quality military corpus. Experiments show that the framework can complete the corpus entity recognition and annotation work well, supporting the construction of military corpora, and that it has broad application prospects in the military domain.
Enterprise Performance Evaluation Model Based on Triangular Fuzzy Multi-attribute Decision Making
ZHANG Biao, DONG Meng-yu, FAN Bei-bei
Computer Science. 2019, 46 (6A): 547-549. 
Abstract PDF(1560KB) ( 295 )   
References | RelatedCitation | Metrics
With the rapid development of information technology, a new era of economic development has arrived. Strategic management is the foundation of a company's economic development, and establishing a long-term, sustainable management evaluation model is core to an enterprise's competitiveness. Traditional performance appraisal focuses on material assets based on financial data, for reasons such as shareholders' interests; in fact, this approach lacks sufficient support. In the new knowledge economy, the key point for enterprise strategic managers is not only to reflect the interests of shareholders but also to consider the needs of stakeholders, so that the enterprise can hold a dominant position in the fierce competition of the future. This paper investigated multiple attribute decision making (MADM) problems for enterprise performance evaluation with triangular fuzzy information. The triangular fuzzy weighted Einstein Bonferroni mean (TFWEBM) operator is used to develop a procedure for multiple attribute decision making in triangular fuzzy environments. Finally, a practical example of enterprise performance evaluation was given to verify the developed approach.
Design and Research on Intelligent Teaching System Based on Deep Learning
CHEN Jin-yin, WANG Zhen, CHEN Jin-yu, CHEN Zhi-qing, ZHEN Hai-bin
Computer Science. 2019, 46 (6A): 550-554. 
Abstract PDF(2211KB) ( 1054 )   
References | RelatedCitation | Metrics
With the rapid development of deep learning, its application in education has gradually received attention. This paper introduced an intelligent teaching system based on deep learning that includes online personalized learning-behavior recommendation and offline bidirectional evaluation of class quality. In the online system, grade prediction and online learning-behavior analysis are achieved with deep learning, and image processing technology is incorporated to classify learning emotions. In the offline system, an object detection model, a face detection model and a face segmentation model are trained and combined with the online system to achieve online learning-behavior feature extraction, offline grade prediction, learning regularity analysis and personalized learning recommendation. The experimental results show that the system not only facilitates access to information but also reduces time costs, effectively improving the teaching efficiency of teachers and the learning efficiency of students.
Improved Deep Deterministic Policy Gradient Algorithm and Its Application in Control
Computer Science. 2019, 46 (6A): 555-557. 
Abstract PDF(1847KB) ( 402 )   
References | RelatedCitation | Metrics
Deep reinforcement learning often suffers from low sampling efficiency, and prioritized sampling can improve it to a certain extent. Prioritized experience replay was applied to the deep deterministic policy gradient (DDPG) algorithm, and a small-sample sorting method was proposed to address the high complexity of the general prioritized experience replay algorithm. Simulation results show that the improved deep deterministic policy gradient algorithm improves sampling efficiency and achieves a better training effect. The algorithm was applied to the direction control of a car; compared with traditional PID control, it avoids manual parameter tuning and has broader application prospects.
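One plausible reading of the "small-sample sorting" simplification is sketched below: instead of maintaining a sum-tree over the whole buffer, draw a random candidate pool and sort only that pool by TD error. This interpretation, the class name and the pool size are assumptions; the DDPG networks themselves are omitted.

```python
import random

class PrioritizedReplay:
    """Prioritized replay with a small-sample sorting step: sample a
    random pool, sort it by |TD error|, return the top of the pool."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []                        # (priority, transition)

    def push(self, transition, td_error):
        self.data.append((abs(td_error) + 1e-6, transition))
        if len(self.data) > self.capacity:
            self.data.pop(0)                  # drop the oldest transition

    def sample(self, batch_size, pool_size=64):
        pool = random.sample(self.data, min(pool_size, len(self.data)))
        pool.sort(key=lambda pt: pt[0], reverse=True)  # largest error first
        return [t for _, t in pool[:batch_size]]
```

Sorting a fixed-size pool costs O(pool_size log pool_size) per draw, independent of the buffer size, which is the complexity reduction the abstract alludes to.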
Patent Analysis on Picture Display of Three Dimensional Panorama
Computer Science. 2019, 46 (6A): 558-561. 
Abstract PDF(3159KB) ( 453 )   
References | RelatedCitation | Metrics
A three-dimensional panorama is a virtual reality technology that uses panoramic images to represent virtual environments, also called a virtual reality panorama. The technique restores the spatial information of a scene by reverse-projecting the panorama onto the surface of a geometric body. Based on the CPRS and DWPI patent databases, patent applications on 3D panoramic image display technology were retrieved. A statistical analysis was made of the number and distribution of patent applications, the important applicants and the related core patents in this field, and typical technical schemes of the various technical branches were enumerated, providing a reference for the development of technology in this field.
Application of Image Processing in Feature Size Detection of Wind Turbine Blade’s Flange Face
HAN Ke-kun, HU Gui-chuan, REN Jing, HE Hong-yu, LIU Jia-yin
Computer Science. 2019, 46 (6A): 562-565. 
Abstract PDF(2957KB) ( 292 )   
References | RelatedCitation | Metrics
In the wind turbine blade manufacturing industry, to solve the problems of low efficiency and high cost in detecting the characteristic dimensions of a wind turbine blade's flange face, this paper proposed a method based on machine vision. A visual imaging platform is built to obtain the original image, and image processing operators are used to detect the aperture and center position of all bolt holes on the blade's flange face, the center-to-center distance between adjacent bolt holes, and the circularity and center coordinates of the bolt-hole distribution circle. The proposed method not only has good stability, high efficiency and accuracy, but also meets all the test requirements and has good practical value.
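Once the hole centers have been extracted from the image (e.g. by circle detection), the geometric quantities the abstract lists reduce to straightforward computation. The sketch below estimates the distribution-circle center, its mean radius, a simple circularity measure (maximum radial deviation over mean radius) and the adjacent center-to-center distances; the exact circularity definition used in the paper is not specified, so this one is an assumption.

```python
import math

def fit_circle_metrics(centers):
    """From detected bolt-hole centers, compute the distribution-circle
    center (centroid), mean radius, circularity, and the distances
    between angularly adjacent holes."""
    cx = sum(x for x, _ in centers) / len(centers)
    cy = sum(y for _, y in centers) / len(centers)
    radii = [math.hypot(x - cx, y - cy) for x, y in centers]
    r = sum(radii) / len(radii)
    circularity = max(abs(ri - r) for ri in radii) / r
    # order holes by angle around the center, then measure neighbors
    pts = sorted(centers, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    dists = [math.hypot(pts[i][0] - pts[(i + 1) % len(pts)][0],
                        pts[i][1] - pts[(i + 1) % len(pts)][1])
             for i in range(len(pts))]
    return (cx, cy), r, circularity, dists
```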
Construction of Personalized Health Monitoring Platform Based on Intelligent Wearable Device
JIA Ning, LI Ying-da
Computer Science. 2019, 46 (6A): 566-570. 
Abstract PDF(2402KB) ( 805 )   
References | RelatedCitation | Metrics
Nowadays, the community health care mode dominated by prevention, health care and pre-diagnosis is constrained by many factors, such as professional knowledge and information technology. In order to help non-professional medical staff acquire health information in time, a personalized health monitoring platform based on wearable devices was designed. The platform addresses the emerging health field and integrates medical information with Internet of Things and big data technologies. It consists of three parts: intelligent wearable devices, terminal applications and a medical information processing server, and mainly provides daily health monitoring, abnormal information alarms, pathological image communication and rapid position acquisition. The intelligent wearable devices collect body temperature, blood pressure, blood oxygen, blood sugar, ECG, location, weight and motion data. The terminal applications are mainly an Android app, an iOS app and a WeChat applet. The medical information processing server adopts a Hadoop architecture, uses the Spark computing framework, and stores information in the distributed database SequoiaDB. The three parts communicate via ZigBee+WiFi, GPRS or Bluetooth. Experiments show that the accuracy of the smart wearable devices is high, and the three communication modes can switch among one another under different conditions to ensure the correctness of information storage.
Application of Reserved Format Encryption Technology in Information Processing of Civil Aviation Information System
LIU Jun, LI Ze-hao, SU Guo-yu, LI Jing-wen
Computer Science. 2019, 46 (6A): 571-576. 
Abstract PDF(2049KB) ( 908 )   
References | RelatedCitation | Metrics
The use of format-preserving encryption technology in the encryption of flight information, passenger information and ticket information was studied by choosing simple and effective algorithms such as the FF1 algorithm, the FF3 algorithm and the more flexible hybrid format encryption algorithm IFX. Functions such as high-bit offset, forward expansion, numerical indirect mapping, and using the database ID as a random factor are used to optimize the extended algorithm. While ensuring the security, integrity and authenticity of the information, the internal format of flight data is preserved, and statistical analysis of civil aviation information can be performed without decryption, reducing the risk of data leakage.
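The defining property, ciphertext in the same format and alphabet as the plaintext, can be illustrated with a toy Feistel network over decimal strings. This is a teaching sketch only: it is NOT NIST FF1/FF3 (those prescribe AES-based round functions, tweaks and specific round counts), and the SHA-256 round function and round count here are arbitrary.

```python
import hashlib

def _round(half, key, i, width):
    """Pseudo-random round function mapping a digit string to an integer
    below 10**width (illustrative; FF1/FF3 use AES here)."""
    h = hashlib.sha256(f"{key}:{i}:{half}".encode()).hexdigest()
    return int(h, 16) % (10 ** width)

def fpe_encrypt(digits, key, rounds=8):
    """Toy Feistel FPE: ciphertext has the same length and digit alphabet."""
    n = len(digits)
    a, b = digits[: n // 2], digits[n // 2:]
    for i in range(rounds):
        a, b = b, f"{(int(a) + _round(b, key, i, len(a))) % (10 ** len(a)):0{len(a)}d}"
    return a + b

def fpe_decrypt(digits, key, rounds=8):
    n = len(digits)
    la = n // 2 if rounds % 2 == 0 else n - n // 2
    a, b = digits[:la], digits[la:]
    for i in reversed(range(rounds)):
        a, b = f"{(int(b) - _round(a, key, i, len(b))) % (10 ** len(b)):0{len(b)}d}", a
    return a + b
```

Because the ciphertext is still a digit string of the same length, it can sit in a fixed-format database column, which is exactly why FPE suits legacy aviation records.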
Design and Improvement of Face Recognition System Based on PCA
LI Meng-xiao, YAO Shi-yuan
Computer Science. 2019, 46 (6A): 577-579. 
Abstract PDF(2454KB) ( 466 )   
References | RelatedCitation | Metrics
Principal Component Analysis (PCA) is a multivariate statistical analysis method that uses feature vectors to analyze sample data and reduce their high dimensionality. In order to solve the problems of high image dimensionality and heavy direct computation when PCA is applied to face recognition, a new eigenvalue decomposition method is adopted and a filter is used to remove noise from the original images. A face recognition system was built on the MATLAB platform, and the common PCA method was compared and analyzed against the PCA method with filtering pretreatment. The experiments proved that the system with filtering has certain performance advantages and reference value for practical application.
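A standard way to avoid the heavy direct computation the abstract mentions is the "snapshot" trick used in eigenfaces: for N images of dimension D with N << D, eigendecompose the small N x N matrix A A^T instead of the D x D covariance, then map the eigenvectors back. Whether this is the paper's exact decomposition method is an assumption; the sketch below shows the trick itself.

```python
import numpy as np

def pca_snapshot(X, k):
    """PCA via the snapshot trick. X: N x D data matrix, N << D.
    Returns the mean and the top-k principal directions (D x k)."""
    mean = X.mean(axis=0)
    A = X - mean                       # centered data, N x D
    S = A @ A.T                        # N x N -- cheap compared to D x D
    w, V = np.linalg.eigh(S)           # eigenvalues ascending
    idx = np.argsort(w)[::-1][:k]      # pick the k largest
    U = A.T @ V[:, idx]                # lift to D-dimensional eigenvectors
    U /= np.linalg.norm(U, axis=0)     # normalize columns
    return mean, U

def project(x, mean, U):
    """Coordinates of a face in the eigenface subspace."""
    return (x - mean) @ U
```

Recognition then compares projected coordinates (e.g. by nearest neighbor); the filtering pretreatment from the paper would be applied to the images before this step.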
Design of IoT Middleware Based on Microservices Architecture
WU Bin-feng
Computer Science. 2019, 46 (6A): 580-584. 
Abstract PDF(2499KB) ( 846 )   
References | RelatedCitation | Metrics
IoT (Internet of Things) systems based on the traditional SOA (Service-Oriented Architecture) scale poorly and can hardly support the continuous integration of heterogeneous devices. Moreover, the interoperability of IoT platforms with third parties becomes crucial as the IoT ecosystem grows. This paper therefore proposed a general IoT middleware based on a microservices architecture to solve these problems, thoroughly researched its internal components and their roles, and studied in detail the service abstraction process for heterogeneous devices and the conflict resolution mechanism in multi-user environments. Through the flexibility of the microservices architecture and the loose coupling between services, not only heterogeneous devices but also third-party IoT systems can be integrated at runtime as services. Finally, actual devices were used to verify the applicability of the middleware.
Research and Design of General Class Library for Cruise Missile Dynamic Simulation
ZHAO Xin-ye, YANG Guang, WANG Yi-tao, WANG Dong
Computer Science. 2019, 46 (6A): 585-588. 
Abstract PDF(2518KB) ( 501 )   
References | RelatedCitation | Metrics
Aiming at the mission requirements of cruise missile dynamics simulation, this paper studied the missile combat process and flight trajectory, and designed and developed a new-generation cruise missile dynamics simulation class library that is cross-platform, general-purpose, flexibly configurable and highly scalable. Taking a YJ-series surface-launched missile attacking surface ships as an example, the class library is used to optimize the calculation, and the trajectory and its parameters are analyzed. The results show that the trajectory simulation of this cruise missile achieves good results. The general class library can be applied to the dynamics simulation of all kinds of cruise missiles, and can also serve as an important subsystem of a missile guidance system for guidance algorithm verification.
Design of Cloud Platform for Energy Internet of Things Based on LPWAN Multi-protocol
BAI Ruo-chen, PANG Chen-xin, JIA Jia, QIU Shu-guang, SHAO Jia, LU Xiao-jiao
Computer Science. 2019, 46 (6A): 589-592. 
Abstract PDF(2216KB) ( 564 )   
References | RelatedCitation | Metrics
The Energy Internet of Things carries many types of data, such as equipment operating data from the power generation side to the distribution side, user-side energy data, and data on new energy technologies and businesses. In order to solve the problems of multi-protocol middleware and multi-source heterogeneous data access, an LPWAN energy IoT cloud platform based on the idea of "cloud-edge integration" was proposed. High-concurrency data access and integration based on the LoRa and NB-IoT network protocols was conducted, providing a technical reference for the design of energy IoT cloud platforms. Performance tests show that the cloud platform is stable in operation and practical.
Application of Clustering Analysis Algorithm in Uncertainty Decision Making
HUANG Hai-yan, LIU Xiao-ming, SUN Hua-yong, YANG Zhi-cai
Computer Science. 2019, 46 (6A): 593-597. 
Abstract PDF(2278KB) ( 385 )   
References | RelatedCitation | Metrics
In order to obtain useful decision information more quickly, and in line with the development of artificial intelligence technology, a clustering analysis algorithm based on K-MEANS is used to cluster decision information. A conceptual model of decision information was put forward to better describe it and to facilitate its analysis and processing. Combined with concrete data examples, clustering algorithms were applied to uncertainty decision making to classify decision information and facilitate the rapid mining of key information. Finally, an evaluation method based on the clustering analysis algorithm was proposed, and a clustering information availability index was defined, which provides a measure of the clustering effect on decision information.
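The clustering step itself is plain Lloyd's K-means, sketched below on numeric decision attributes; the paper's conceptual model and availability index are not reproduced, and the random initialization is one common choice.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's K-means: alternate nearest-center assignment and
    center recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])   # keep empty clusters put
    return labels, centers
```

On decision data, each resulting cluster groups alternatives with similar attribute profiles, which is what lets key information be read off quickly.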
Approach for Discovering Prerequisite Relationships Between User Generated Learning Resources
XIAO Kui, CHEN Zhi-xiong, LIU Guo-jun, HUANG Zhi-fang
Computer Science. 2019, 46 (6A): 598-600. 
Abstract PDF(1786KB) ( 256 )   
References | RelatedCitation | Metrics
In recent years, the number of user-generated learning resources has grown rapidly with the popularity of the Internet. This paper proposed an approach for discovering prerequisite relationships between user-generated learning resources based on Wikipedia. The relationships between related learning resources are found using Wikipedia categories, article links and article length. The experimental results show that these properties are useful for sequencing learning resources.
FIR High Pass Digital Filter Design Based on Improved Chaos Particle Swarm Optimization Algorithm
HU Xin-nan
Computer Science. 2019, 46 (6A): 601-604. 
Abstract PDF(1879KB) ( 455 )   
References | RelatedCitation | Metrics
This paper proposed a chaos particle swarm optimization (CPSO) algorithm combined with improved weights to design linear-phase FIR digital filters. In this method, the minimum mean square error function is used as the fitness function, and the coefficients of the FIR digital filter are finally obtained. To confirm the effectiveness of the method, the CPSO algorithm was compared with the least squares method and basic PSO. The experimental results show that the FIR digital filter designed by CPSO has better convergence, pass-band characteristics and stop-band characteristics.
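The general shape of a chaos-seeded PSO with a linearly decreasing inertia weight is sketched below. A simple sphere function stands in for the filter-design fitness (which would be the mean square error between the designed and ideal frequency responses); the logistic-map seeding, coefficient values and bounds are all assumptions.

```python
import random

def logistic_chaos(n, x=0.345):
    """Logistic-map sequence in (0,1) used to seed particles chaotically."""
    seq = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

def cpso(fitness, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    random.seed(seed)
    chaos = logistic_chaos(n_particles * dim)
    X = [[lo + (hi - lo) * chaos[p * dim + d] for d in range(dim)]
         for p in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    pval = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda p: pval[p])
    gbest, gval = list(pbest[g]), pval[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # linearly decreasing inertia
        for p in range(n_particles):
            for d in range(dim):
                V[p][d] = (w * V[p][d]
                           + 2.0 * random.random() * (pbest[p][d] - X[p][d])
                           + 2.0 * random.random() * (gbest[d] - X[p][d]))
                X[p][d] += V[p][d]
            f = fitness(X[p])
            if f < pval[p]:
                pval[p], pbest[p] = f, list(X[p])
                if f < gval:
                    gval, gbest = f, list(X[p])
    return gbest, gval
```

For filter design, `dim` would be the number of free filter coefficients and the returned `gbest` the coefficient vector.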
Analysis and Establishment of Drilling Speed Prediction Model for Drilling Machinery Based on Artificial Neural Networks
LIU Sheng-wa, SUN Jun-ming, GAO Xiang, WANG Min
Computer Science. 2019, 46 (6A): 605-608. 
Abstract PDF(2087KB) ( 569 )   
References | RelatedCitation | Metrics
Over the past ten years, the various departments of Changqing Drilling Company have accumulated a great deal of drilling data. With the completion and operation of the enterprise cloud platform, data integration technology has been adopted to collect and standardize the data of the various departments. Mining these accumulated, valuable data can provide a reference for making drilling plans scientifically. Accurate prediction of the penetration rate plays an important role in allocating drilling resources scientifically and reducing drilling costs. This paper proposed a novel neural-network-based method for predicting the penetration rate of directional wells. The network inputs and outputs are determined by drilling experts, and the network topology and training are designed by data engineers. The experimental results show that, given sufficient data of high quality, the prediction model constructed by the neural network has high accuracy and can meet users' needs.