Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 46 Issue 11A, 10 November 2019
Intelligent Computing
Spatio-temporal Features Extraction of Traffic Based on Deep Neural Network
JING Jie, CHEN Tan, DU Wen-li, LIU Zhi-kang, YIN Hao
Computer Science. 2019, 46 (11A): 1-4. 
Autopilot is a hot research direction, and traffic congestion is a perennial social problem in China. In the future, traffic congestion is likely to occur on roads where self-driving and human-driven vehicles coexist. On the basis of existing theories, this paper considered a variety of factors that may affect autopilot, including different speeds and neural network designs. To improve overall traffic efficiency while maintaining safety, all self-driving vehicles should travel as fast as possible, improving road efficiency and fundamentally relieving congestion. The feature extraction problem of this special case differs from feature extraction on image data, so the road is represented in a two-dimensional plane, and the three-dimensional data formed by stacking this two-dimensional information over time is processed by a hybrid neural network. Spatial and temporal features are extracted with a deep neural network so that the vehicle can respond better. Finally, the system design is combined with reinforcement learning so that it can be trained, and the effect of the neural networks can thus be tested.
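The stacking of two-dimensional road information into a three-dimensional input, as described above, can be sketched as follows. The grid shape, cell layout and occupancy encoding are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

# Hypothetical sketch: encode a road segment at one time step as a 2D grid
# (rows = lanes, cols = road cells; 1.0 = cell occupied by a vehicle), then
# stack T consecutive grids into a 3D tensor that a spatio-temporal network
# could consume.
def occupancy_grid(vehicle_cells, lanes=3, cells=10):
    grid = np.zeros((lanes, cells), dtype=np.float32)
    for lane, cell in vehicle_cells:
        grid[lane, cell] = 1.0
    return grid

def stack_frames(frames):
    """Stack per-time-step 2D grids into a (T, lanes, cells) tensor."""
    return np.stack(frames, axis=0)
```

For example, stacking two frames with two vehicles each yields a tensor of shape (2, 3, 10) whose entries sum to 4.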
Attribute Sentiment Classification Towards Question-answering Text
JIANG Ming-qi, LEE Sophia Yat Mei, LIU Huan, LI Shou-shan
Computer Science. 2019, 46 (11A): 5-8. 
The goal of traditional sentiment analysis is to obtain the sentiment polarity of a whole text, which is a coarse-grained task. Recently, with improved technology, the sentiment analysis task has been refined, and researchers hope to obtain the sentiment polarity of a given target within the text. The purpose of this paper is to obtain the sentiment polarity of a product attribute in question-answering (QA) text. To perform attribute sentiment classification towards QA text pairs, this paper proposed a novel approach based on the attention mechanism. Firstly, the attribute information is concatenated onto the answer word vectors. Secondly, LSTM models are leveraged to encode the question text and answer text. Thirdly, the relation between question and answer is captured with the attention mechanism to obtain the overall feature of the answer. Finally, a classifier produces the result from this feature. Empirical studies demonstrate the effectiveness of the proposed approach to attribute sentiment classification towards question-answering text.
Military Domain Named Entity Recognition Based on Multi-label
SHAN Yi-dong, WANG Heng-jun, WANG Na
Computer Science. 2019, 46 (11A): 9-12. 
In order to identify military named entities in military texts, this paper classified them into six categories according to their characteristics. On this basis, in order to further solve the problem that multi-nested and combined compound military named entities are difficult to identify, the traditional annotation method was improved and a multi-label annotation method was proposed. First, the compound military named entity is divided into several words, so that it becomes a combination of multiple minimal phrases, and each phrase is then labeled according to its position in the named entity. On this basis, each word within a phrase is labeled according to its position in the phrase. Finally, the combined label is used as the annotation result for each word in the military named entity. The experimental results show that this annotation method enhances the recognition of military named entities.
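The composite multi-label idea can be illustrated with a small sketch. The B/I/E/S tag names and the English example entity are hypothetical; the paper defines its own label set for Chinese military text:

```python
# Hypothetical sketch of the composite multi-label scheme: each word in a
# compound entity receives a two-part label -- the position of its phrase
# within the entity (B/I/E, or S for a single-phrase entity) and its own
# position within that phrase (B/I/E/S).
def position_tags(n):
    """Positional tags for a sequence of n items."""
    if n == 1:
        return ["S"]
    return ["B"] + ["I"] * (n - 2) + ["E"]

def multi_label(phrases):
    """phrases: list of phrases (each a list of words) forming one entity.
    Returns (word, composite_label) pairs."""
    labels = []
    for p_tag, phrase in zip(position_tags(len(phrases)), phrases):
        for w_tag, word in zip(position_tags(len(phrase)), phrase):
            labels.append((word, f"{p_tag}-{w_tag}"))
    return labels
```

For a two-phrase entity such as [["1st", "Armored"], ["Division"]], this yields labels B-B, B-E, E-S.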
Study on Semantic Topology and Supervised Word Sense Disambiguation of Polysemous Words
XIAO Rui, JIANG Jia-qi, ZHANG Yun-chun
Computer Science. 2019, 46 (11A): 13-18. 
Polysemous words are a learning emphasis and a major obstacle for foreign students who learn Chinese as a foreign language. Word Sense Disambiguation (WSD), which determines the specific meaning of a polysemous word in a given context, has important applications in human-computer interaction, machine translation, automatic essay scoring and other emerging areas. It is also a difficulty in teaching Chinese as a foreign language and in the HSK examination. Existing word sense disambiguation methods suffer from low accuracy, lack of corpora, and overly simple features. Considering teaching Chinese as a foreign language and its evaluation corpora, building Chinese polysemous-word WSD on deep neural networks is a hot research topic, and it provides necessary technical support for automatic HSK essay scoring. Existing research assumed that semantic items are mutually independent and thus paid little attention to the evolutionary relationships among them. To solve this problem, this paper first studied the semantics of typical Chinese polysemous words. After semantic topology construction, the basic semantic items and set phrases were discriminated for supervised classification model training. Based on the semantic topology of polysemous words, corpus samples were collected by web crawling. Supervised deep neural networks, including RNN, LSTM and GRU, were then constructed. By analyzing the crawled samples, both uni-directional and bi-directional neural networks were designed with context lengths of 30 and 60 words respectively. The final WSD classification models were obtained by multiple rounds of training and optimization of model parameters. The word "Yisi" was chosen as an example and used for WSD experiments within its contexts. The experimental results show that RNN, LSTM and GRU all achieve an average classification accuracy of more than 75%, with a maximum accuracy of more than 94%. The AUC under each model is more than 0.966, which shows good performance despite class imbalance among samples. Both uni-directional and bi-directional RNN models achieve the best classification performance under the different word lengths.
Branching Strategy Based on Weighted Decision Variable Level
WANG Meng, HE Xing-xing
Computer Science. 2019, 46 (11A): 19-22. 
Abstract PDF(1785KB) ( 239 )   
References | RelatedCitation | Metrics
In order to improve the efficiency of CDCL solvers, for the decision-variable selection problem in satisfiability (SAT) algorithms, a branching strategy based on weighted decision-variable level was proposed. The main idea of the new strategy builds on Boolean constraint propagation (BCP), backtracking and the restart mechanism. Firstly, the number of times a variable is used as a decision variable and its decision level are considered. Secondly, because variables differ in how often they are selected and in their decision levels, variables are assigned different weights. Finally, in combination with the conflict analysis process, variables are rewarded and scored. The scores of variables under the new strategy are compared with those under the VSIDS and EVSIDS strategies. A large number of instances from the SATLIB benchmark library were used for experimental testing, and the results show that the new strategy reduces the number of conflicts and the CPU solution time, improving the efficiency of the solver.
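One plausible reading of level-weighted scoring can be sketched in the style of a VSIDS-like activity heuristic. The weighting rule and decay constant below are assumptions for illustration, not the paper's exact formulas:

```python
# Sketch of a VSIDS-style activity score extended with a weight derived
# from the decision level at which a variable was involved in a conflict.
# The specific weighting (shallower decisions rewarded more) is an
# illustrative assumption.
class Scores:
    def __init__(self, n_vars, decay=0.95):
        self.act = [0.0] * (n_vars + 1)   # activity per variable (1-indexed)
        self.decay = decay

    def bump(self, var, decision_level, max_level):
        # Variables decided at shallower levels get a larger reward.
        weight = 1.0 + (max_level - decision_level) / max(max_level, 1)
        self.act[var] += weight

    def on_conflict(self, conflict_vars, levels, max_level):
        # Reward variables seen in the learned clause, then decay all scores.
        for v in conflict_vars:
            self.bump(v, levels[v], max_level)
        self.act = [a * self.decay for a in self.act]

    def pick(self, unassigned):
        # Branch on the unassigned variable with the highest activity.
        return max(unassigned, key=lambda v: self.act[v])
```

A variable bumped at level 1 out of 3 then outranks one bumped at level 3, so the next decision prefers it.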
Spatio-temporal Trajectory Prediction of Power Grid Based on Double Layers Stacked Long Short-term Memory
YANG Jia-ning, HUANG Xiang-sheng, LI Zong-han, RONG Can, LIU Dao-wei
Computer Science. 2019, 46 (11A): 23-27. 
With the development of wide-area measurement technology, it is important to assess transient stability in advance and take preventive control measures for the safety and stability of power systems, and spatio-temporal trajectory prediction is the key. Although traditional model-free methods for power-grid trajectory prediction do not depend on a system model and compute quickly, they cannot take the spatial topology of the grid into account. In addition, in the big-data environment of modern complex power grids, the accuracy of model-based prediction still needs to be improved compared with deep learning methods. Therefore, this paper proposed a spatio-temporal trajectory prediction model based on a double-layer stacked long short-term memory (LSTM) network and neighborhood relationships. It adopts a stacked LSTM neural network and introduces the features of the first- and second-order neighbor nodes of the generator to be predicted into the model. The experimental results show that, on the test set, the root mean square error of the prediction decreases (and the accuracy increases) step by step across support vector regression, the recurrent neural network, the single-layer LSTM, and the proposed double-layer stacked LSTM method. When first-order and second-order neighbor nodes are respectively introduced into the prediction, accuracy increases as more adjacent nodes are introduced. Compared with traditional methods of power-grid spatio-temporal trajectory prediction, the model based on double-layer stacked LSTM and the topological relationship of neighboring nodes better characterizes the change of the power-grid spatio-temporal trajectory under transient scenarios, and predicts the trajectory more accurately.
Path Planning Based on Pulse Coupled Neural Networks with Directed Constraint
SUN Yi-bin, YANG Hui-zhen
Computer Science. 2019, 46 (11A): 28-32. 
This paper proposed a path planning method based on pulse coupled neural networks (PCNN) with a directed constraint. Unlike classical neural networks, this application requires no pre-training. The method combines topological maps with PCNN and designs distance and angle constraints. In this way, the number of activated neurons is reduced and the efficiency of path planning is improved. Simulation results show that this path planning algorithm is faster than the A* algorithm.
Product Rating with Text Information and Hierarchical Neural Network
ZHAO Yun, WANG Zhong-qing, LI Shou-shan
Computer Science. 2019, 46 (11A): 33-37. 
Usually, the rating of a product on a website is obtained by averaging the ratings of its reviews, but this method relies heavily on review ratings and is not accurate enough for products with few reviews. Different from the traditional product scoring mechanism, this paper proposed a hierarchical neural network model that scores products based on their text information, which can derive fair product scores from limited reviews. Product reviews have a hierarchical [word-sentence-review-product] structure, so a three-layer GRU is used to obtain representations of sentences, reviews and products separately, in order to predict the final score of the product. In addition, an auxiliary output is added at the review layer. Experiments on both regression and classification prediction tasks show that the hierarchical structure of the model plays a crucial role in predicting product scores, and the auxiliary review-level output further improves prediction accuracy.
3D Tree-modeling Approach Based on Competition over Space Resources
YANG Hai-quan, WANG Yi-feng, WANG Zhi-qiang, ZHANG Zhi-wei
Computer Science. 2019, 46 (11A): 38-41. 
Given the vast variety of trees in nature, the complexity of their geometric shapes and the great differences in their structures, this paper explored a tree-modeling approach based on competition over space resources. In particular, attraction points are randomly distributed in a certain space, and the three-dimensional skeleton of the tree is constructed by the iterative interaction between tree nodes and attraction points. Bezier curves are utilized to optimize the skeleton of the tree, and the geometric model of the tree is constructed from truncated cones (circular frustums). Leaf-order and shadow-propagation algorithms are also used to control the distribution of leaves on the branches. Compared with the L-system and the space colonization algorithm, the experimental results show that trees drawn by this approach have a strong sense of reality; they not only grow while avoiding obstacles effectively, but also require a small amount of data.
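The node/attraction-point interaction that builds the skeleton is essentially a space colonization step, which can be sketched as follows. The influence radius, kill radius and step length are illustrative values:

```python
import numpy as np

# Minimal sketch of one growth iteration of the space colonization family
# of algorithms this paper builds on: each attraction point pulls its
# nearest tree node; each pulled node grows a fixed step toward the mean
# direction of its attractors; attraction points a node reaches are removed.
def grow_step(nodes, attractors, influence=4.0, kill=0.5, step=0.3):
    new_nodes = []
    pulls = {}                          # node index -> list of unit pulls
    for a in attractors:
        d = np.linalg.norm(nodes - a, axis=1)
        i = int(np.argmin(d))
        if 1e-9 < d[i] < influence:
            pulls.setdefault(i, []).append((a - nodes[i]) / d[i])
    for i, dirs in pulls.items():
        v = np.mean(dirs, axis=0)
        v /= np.linalg.norm(v)
        new_nodes.append(nodes[i] + step * v)
    if new_nodes:
        nodes = np.vstack([nodes, new_nodes])
    # Drop attraction points that some node has now reached.
    keep = [a for a in attractors
            if np.linalg.norm(nodes - a, axis=1).min() >= kill]
    return nodes, keep
```

Starting from a single root node below one attraction point, a step grows one new node straight toward it; iterating until the attractor set is empty traces out the skeleton.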
Method of Automatic Construction of 3D CAD Model Based on 2D Engineering Sketch
SUN Jin, SUN Chang-le, GUAN Guang-feng
Computer Science. 2019, 46 (11A): 42-46. 
In the process of product design and use, performance analysis and maintenance require a 3D CAD model of the product. In the design process, designers usually use AutoCAD to produce 2D engineering drawings. Converting a 2D engineering drawing into a 3D model in a short time is the key to shortening the engineering cycle and maintaining products rapidly. Based on B-spline curve and B-spline surface theory, this paper proposed a process for automatically constructing a 3D CAD model from a 2D engineering drawing using surface fitting, together with a data-point screening and optimization algorithm that reduces the number of data points used. It solves the problem that manual conversion from 2D engineering drawings to 3D CAD models is time-consuming and labor-intensive, and shortens the product development cycle. Taking a ship model as an example, the paper validated the feasibility of the method.
Fuzzy Cognitive Map Method for Forecasting Urban Water Demand
HAN Hui-jian, SONG Xin-fang, ZHANG Hui
Computer Science. 2019, 46 (11A): 47-51. 
The state data of system operation is the product of the interaction of complex factors, and the change of water demand is affected by many factors. Traditional time-series prediction methods use a single predictor variable, ignoring the causal relationships among the various factors of the system. Therefore, this paper proposed a prediction method based on the Fuzzy Cognitive Map (FCM), which captures exactly this kind of structure: it is a fuzzy feedback reasoning mechanism with weights, which quantifies the causal relationships between concepts and simulates the running of the entire system. This paper combined the fuzzy cognitive map with a genetic algorithm to construct the urban water demand model, collected and organized data from 2001 to 2010, and used data from 2011 to 2015 for verification and testing. The results show that, in terms of five-year average relative error, the nonlinear trend model achieves 5.91%, the BP neural network 1.83%, and the proposed method 1.34%. Therefore, the prediction accuracy of this method is higher and its generalization performance is good. According to the analysis of the experimental data, the management of water resources in Jinan City should in future properly control the water consumption per unit of GDP and of industrial added value, and increase the urban industrial water reuse rate and the domestic water recovery rate. This model provides a more efficient method for urban water demand forecasting and analysis.
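The FCM reasoning mechanism referred to above can be sketched with a common update rule. The weight matrix and the particular squashing/update form below are illustrative, not the GA-learned model from the paper:

```python
import numpy as np

# Sketch of FCM inference: concept activations are repeatedly propagated
# through the signed causal-weight matrix W and squashed by a sigmoid,
# until the state stabilises. A(t+1) = sigmoid(A(t) + A(t) @ W) is one
# commonly used update rule.
def fcm_step(state, W, lam=1.0):
    x = state + state @ W
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_run(state, W, iters=50, tol=1e-6):
    """Iterate the FCM until convergence or the iteration cap."""
    for _ in range(iters):
        nxt = fcm_step(state, W)
        if np.max(np.abs(nxt - state)) < tol:
            break
        state = nxt
    return state
```

With three concepts and a small signed weight matrix, the run converges to a stable activation vector with every component in (0, 1).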
Time Series Analysis Based on MSH-LSTM
ZHANG Xu-dong, DU Jia-hao, HUANG Yu-fang, SHI Dong-xian, MIAO Yong-wei
Computer Science. 2019, 46 (11A): 52-57. 
Nowadays, most research in deep learning depends on the self-learning capacity of the neural network used. Specifically, it focuses on using as few human-knowledge priors as possible during training, which leads to totally black-box models whose training process is hard to clarify semantically. In light of this situation, this paper proposes an improved structure over the primitive LSTM, the Multi-Scale Hierarchical Long Short-Term Memory (MSH-LSTM). It retains the common procedures widely used in neural networks, combines the network structure with human prior knowledge, and enables the network to train purposefully under this guidance, alleviating the black-box problem to a degree and ultimately yielding much better analytic results on time-series data. To illustrate the effectiveness of MSH-LSTM, two groups of experiments (temperature and stock price) were carried out. Experimental results demonstrate that the proposed MSH-LSTM outperforms the primitive ANN, LSTM and GRU without loss of applicability. In the temperature experiment, MSH-LSTM, the primitive LSTM and the primitive GRU exploit temporal information to obtain comparable results, all better than the primitive ANN. In the stock-price experiment, MSH-LSTM's superiority is more obvious: its MAPE is improved by an average of 19.65%, 24.35% and 46.3% compared with the primitive LSTM, GRU and ANN, respectively.
Research on Track Fitting Model Under Two-way RNN
ZHANG Jie, WANG Gang, YAO Xiao-qiang, SONG Ya-fei, ZHENG Kang-bo
Computer Science. 2019, 46 (11A): 58-61. 
The modeling of flight path fitting is always one of the key problems in the research of combat agent training. Aiming at the low precision of track fitting in current combat multi-agent simulation training, a training strategy based on an improved recurrent neural network and cubic spline interpolation was proposed. Taking the pitch, roll and yaw angles of the aircraft as reference objects, the track in the training process is fitted with a cubic spline interpolation algorithm, and the error is reduced by recurrent-neural-network training. A large number of simulation experiments and final engineering practice show that the method has higher accuracy and rationality than existing track simulation algorithms: under the same background, the track length decreases by nearly 10 percentage points, and the accuracy is more than 5 percentage points higher than algorithms in the same field. The proposed algorithm effectively reduces the error between the simulated track and actual operations in combat-agent simulation training.
Short-term Forecasting Model of Agricultural Product Price Index Based on LSTM-DA Neural Network
JIA Ning, ZHENG Chun-jun
Computer Science. 2019, 46 (11A): 62-65. 
The price of agricultural products has always been a key factor in social and economic life. Because of the non-linear relationship between predicted prices and their influencing factors, recurrent neural networks are suitable for this time-series prediction; however, over long time spans their prediction effect is limited. According to the price characteristics of agricultural products, an LSTM-DA (Long Short-Term Memory-Double Attention) neural network model was designed, combining a convolutional attention network, a Long Short-Term Memory network and an attention mechanism. Attention factors of different components are extracted by the convolutional attention network, and the correspondingly weighted features are fed into the LSTM model. Taking the influence of the time series into account, the results are then passed to the attention mechanism for weight adjustment, and finally used for short-term prediction of the agricultural product price index. Before the experiment, a multi-threaded crawler collected a large amount of price, weather and other related data from agricultural information platforms; after analysis and cleaning, the data were stored in a Hadoop Distributed File System. In the experiment, the LSTM network was used as the baseline. Compared with the traditional single model, this model improves prediction accuracy, and the predicted price index accurately describes the overall trend of vegetable products in the following week.
Short Text Feature Extension Method Based on Bayesian Networks
LIU Hui-qing, GUO Yan-bu, LI Hong-ling, LI Wei-hua
Computer Science. 2019, 46 (11A): 66-71. 
Aiming at the problems of feature sparsity and insufficient representation ability in short texts, this paper proposed a feature extension method based on Bayesian networks. Firstly, a semantic Bayesian network is constructed by defining the dependencies between feature words in the short texts. Then, a correlation degree between a feature word and a short text is defined, and feature words closely related to the short text are selected; these words are extended into the short text to reduce its noise and sparsity. Finally, this paper analyzed the feasibility and effectiveness of the proposed method with short text classification as the basic text-analysis task. The experimental results on the Amazon product dataset show that the proposed method is feasible and effective.
Simulation Research on Offshore Vertical Replenishment Planning Based on Multi-agent
DONG Peng, WU Chong, YU Peng, WEN Hao-lin
Computer Science. 2019, 46 (11A): 72-75. 
In order to optimize the planning of vertical replenishment at sea in formation, this paper developed an optimal vertical replenishment transportation scheme. Firstly, the process of vertical replenishment at sea and the possible queuing of materiel are analyzed. Then, a multi-agent system is used to simulate and model the process of marine replenishment: three kinds of agents (replenishment ship, receiving ship and helicopter) are established and used to build a multi-agent-based vertical replenishment planning model at sea. Simulation experiments and analysis of vertical replenishment planning problems in peacetime and wartime were carried out respectively, and the simulation results verify the rationality of the model.
Prediction Model of E-sports Behavior Pattern Based on Attention Mechanism and LRUA Module
YU Cheng, ZHU Wan-ning, YOU Kun, ZHU Jin-fu
Computer Science. 2019, 46 (11A): 76-79. 
With the development of the e-sports industry, it is more and more important to analyze data accurately and quickly. This paper studied the prediction of e-sports behavior patterns. From the perspective of metric learning, the model inaccuracy caused by the different evaluation scales of teams is reduced by introducing the modified cosine measure instead of the cosine measure. Meanwhile, in order to further improve the accuracy of the model, and considering that the content of the data matters most in this setting, the LRUA module is introduced for memory access. Experiments show that the proposed model has high accuracy and low volatility.
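The substitution of the modified (adjusted) cosine for the plain cosine can be sketched as follows; interpreting "modified cosine" as mean-centering each vector before taking the cosine is an assumption based on the standard definition:

```python
import numpy as np

# Sketch: the modified cosine centres each rating vector on its own mean
# before computing the cosine, removing the per-team offset bias that a
# plain cosine is sensitive to.
def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def modified_cosine(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return cosine(xc, yc)
```

Two profiles that differ only by a constant offset, such as [1, 2, 3] and [3, 4, 5], have plain cosine below 1 but modified cosine exactly 1, illustrating why the modified measure is insensitive to scale offsets between teams.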
Kernel Fractional Lower Power Adaptive Filtering Algorithm Against Impulsive Noise
DONG Qing, LIN Yun
Computer Science. 2019, 46 (11A): 80-82. 
To filter out non-Gaussian impulsive noise, a kernel fractional lower power (KFLP) algorithm based on the fractional lower-order statistics error criterion was proposed. Owing to the reciprocal form of the fractional lower-order power coefficient, the adaptive update of the weight vector stops automatically in the presence of impulsive interference, eliminating the effect of impulses on the weight-vector update. Simulation results show that as the power of the cost function approaches unity, the robustness of the kernel lower-power algorithm in non-Gaussian impulsive environments improves. Moreover, compared with the kernel least-mean-square (KLMS) algorithm based on the mean-square error criterion, the proposed algorithm has a smoother convergence curve and more stable performance.
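The fractional lower-power error update can be sketched as a KLMS-style kernel filter whose new coefficient uses sign(e)|e|^(p-1) with p < 2, so that large impulsive errors are damped rather than amplified. Step size, p and kernel width below are illustrative values, not the paper's settings:

```python
import numpy as np

# Sketch of a kernelised fractional lower-power adaptive filter: like KLMS
# it grows a dictionary of centres, but each new coefficient is
# step * sign(e) * |e|**(p-1) with p < 2, which bounds the influence of
# impulsive errors.
class KFLP:
    def __init__(self, step=0.5, p=1.2, sigma=1.0):
        self.step, self.p, self.sigma = step, p, sigma
        self.centers, self.coefs = [], []

    def _kernel(self, x, c):
        # Gaussian kernel between input x and stored centre c.
        return np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.coefs, self.centers))

    def update(self, x, d):
        # Fractional lower-power update of the expansion coefficients.
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.coefs.append(self.step * np.sign(e) * abs(e) ** (self.p - 1))
        return e
```

Feeding the same input/target pair repeatedly, the prediction error shrinks step by step, as expected of an adaptive filter.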
Multi-objective Grey Wolf Optimization Hybrid Adaptive Differential Evolution Mechanism
ZHAO Yun-tao, CHEN Jing-cheng, LI Wei-gang
Computer Science. 2019, 46 (11A): 83-88. 
Because the grey wolf algorithm easily falls into local optima, a multi-objective grey wolf optimization based on an adaptive differential evolution mechanism was proposed. Firstly, the external archive is grouped according to the distance of the objective function values to avoid storing similar individuals. Secondly, a selection mechanism for the head wolves is adopted. Finally, differential evolution is introduced into the updating process to select the next generation of grey wolves; at the same time, the parameters of differential evolution are adaptively adjusted according to the objective values of candidate solutions, to balance local exploitation and global exploration. The experimental results show that the proposed multi-objective grey wolf optimization has better convergence and distribution than the other three algorithms.
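The differential-evolution step with adaptive parameters can be sketched as follows. The DE/rand/1/bin operator is standard; the fitness-based adaptation rule shown is an illustrative assumption, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# DE/rand/1/bin candidate generation for individual i of the population.
def de_candidate(pop, i, F=0.5, CR=0.9):
    n, dim = pop.shape
    a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
    mutant = pop[a] + F * (pop[b] - pop[c])
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True      # guarantee at least one mutated gene
    return np.where(cross, mutant, pop[i])

# Illustrative adaptation: worse candidates get a larger F (more
# exploration), better ones a smaller F (more exploitation).
def adaptive_F(f_i, f_best, f_worst, F_min=0.4, F_max=0.9):
    if f_worst == f_best:
        return F_min
    return F_min + (F_max - F_min) * (f_i - f_best) / (f_worst - f_best)
```

In the hybrid scheme described above, each wolf's position update would be followed by a DE candidate, kept only if it dominates the original.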
Intelligent Bone Age Assessment Based on Deep Learning
CHI Kai-kai, CAI Rong-hui, DING Wei-long, HUAN Ruo-hong, MAO Ke-ji
Computer Science. 2019, 46 (11A): 89-93. 
The bone ages of children and adolescents indicate their growth condition. The traditional clinical method of bone-age assessment is for a doctor to visually inspect the maturity of multiple particular bones in an X-ray film of the whole left hand; the accuracy depends greatly on the doctor's subjective judgment, and the evaluation is time-consuming. At present, deep convolutional neural networks have been used for automated bone-age assessment based on the whole left-hand bone image. In order to improve the accuracy of bone-age assessment, this paper proposed to segment the 14 specific bones used for bone-age assessment from each whole-hand image, and then train a deep convolutional neural network (AlexNet) for each of the 14 bones to evaluate its maturity level. In addition, considering that bone development is a continuous process, unlike the discrete growth levels selected in the traditional method, this paper uses the classification probabilities of the two most probable levels output by the network to calculate a weighted score. The test results show that the proposed method has an average bone-age error of 0.456 years and an accuracy of 94.64% when the allowed error range is 1.0 year, which is significantly better than automated bone-age assessment based on the whole-hand image.
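The weighted score from the two most probable maturity levels can be sketched directly; the level values and probabilities below are illustrative:

```python
# Sketch of the continuous maturity score: instead of taking the single
# argmax level, blend the two most probable discrete levels by their
# renormalised probabilities, reflecting that bone development is
# continuous.
def weighted_level(probs, levels):
    # Indices of the two most probable levels.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    i, j = top
    w = probs[i] + probs[j]
    return (probs[i] * levels[i] + probs[j] * levels[j]) / w
```

For probabilities [0.1, 0.6, 0.3] over levels [1, 2, 3], the blended score is (0.6*2 + 0.3*3) / 0.9 ≈ 2.33, between the two most probable discrete levels.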
Dynamic Target Following Based on Reinforcement Learning of Robot-car
XU Ji-ning, ZENG Jie
Computer Science. 2019, 46 (11A): 94-97. 
Robot path planning has always been a hot topic in robot motion control. Current path planning takes a lot of time to build a map, but reinforcement learning, based on a continuous trial-and-error mechanism, can realize mapless navigation. Through research and analysis of various current deep reinforcement learning algorithms, a robot car using low-dimensional radar data and a small amount of position information can follow a moving target and avoid collisions in indoor environments. The results show that the DQN, Dueling Double DQN and DDPG algorithms based on priority sampling present strong generalization capabilities in different environments.
Improved CoreSets Construction Algorithm for Bayesian Logistic Regression
ZHANG Shi-xiang, LI Wang-geng, LI Tong, ZHU Nan-nan
Computer Science. 2019, 46 (11A): 98-102. 
With the rapid development of the Internet, new kinds of information dissemination are emerging, leading to an explosion of data at an unprecedented rate. How to process and analyze huge amounts of raw data and turn them into usable knowledge has become an important topic of common concern for scientists and engineers at home and abroad. The Bayesian approach provides rich hierarchical models, uncertainty quantification and prior specification, making it very attractive for large-scale data. The limited-iteration bisecting K-means algorithm preserves the clustering quality of the standard bisecting K-means algorithm with higher computational efficiency, and is more suitable for large datasets requiring faster processing. Aiming at the low execution efficiency of the original coresets construction algorithm, it is improved with limited-iteration bisecting K-means, so that the clustering result is obtained faster and the weights of the relevant data points are calculated while the clustering quality is preserved, thus constructing the coresets. Experiments show that, compared with the original algorithm, the improved algorithm has higher computational efficiency, similar approximation performance, and in some cases a better approximation effect.
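Limited-iteration bisecting K-means, the building block above, can be sketched as follows; the iteration cap and the largest-cluster split rule are illustrative choices:

```python
import numpy as np

# Two-way K-means with a capped number of Lloyd iterations -- the
# "limited-iteration" ingredient: the inner loop is stopped after a fixed
# budget rather than run to convergence.
def kmeans2(X, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):                      # limited iterations
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        for k in (0, 1):
            if (lab == k).any():
                centers[k] = X[lab == k].mean(axis=0)
    return lab

# Bisecting K-means: repeatedly split the largest cluster in two until
# k clusters remain; returns index arrays into X.
def bisecting(X, k, iters=5):
    clusters = [np.arange(len(X))]
    while len(clusters) < k:
        big = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(big)
        lab = kmeans2(X[idx], iters)
        clusters += [idx[lab == 0], idx[lab == 1]]
    return clusters
```

On two well-separated 2D blobs, one bisection recovers the blobs exactly even with the small iteration budget.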
Hierarchy Division of Compound Sentence with Non-saturated Relation Word via Neural Network
YANG Jin-cai, YANG Lu-lu, WANG Yan-yan, SHEN Xian-jun
Computer Science. 2019, 46 (11A): 103-107. 
Hierarchical division of a compound sentence is the basis of syntactic structure analysis and semantic discrimination. However, the ellipsis of relational markers brings difficulties to this division. This paper combined dependency syntax trees with the word2vec word-vector model to extract the syntactic structure and semantic features of compound sentences, then used a neural network to train a hierarchy division model for compound sentences with non-saturated relation words, and carried out hierarchical division tests on the compound sentences in the test set. The accuracy on the test set is 74%.
Children’s Reading Speech Evaluation Model Based on Deep Speech and Multi-layer LSTM
ZHENG Chun-jun, JIA Ning
Computer Science. 2019, 46 (11A): 108-111. 
Most modern people ignore the importance of reading aloud. However, for children aged 5 to 12, reading aloud is not only an essential skill in the learning process, but also an effective means of cultivating sentiment. Since there is a nonlinear relationship between the characteristics of the spoken speech signal and the evaluation criteria, recurrent neural networks are suitable for this time-series prediction, but their prediction effect is limited over long time spans. According to the characteristics of children's spoken speech and its evaluation system, a new model combining Deep Speech and a three-layer LSTM (Long Short-Term Memory) neural network was designed. Firstly, on the basis of an added attention mechanism, accuracy and fluency measures of speech evaluation are put forward, and the spectrogram is used as the input for feature extraction. The accuracy of reading uses the new version of Deep Speech to improve phoneme recognition accuracy; for fluency evaluation, the spectrogram is fed into the three-layer LSTM model to capture time-series effects. The results are then sent to the attention mechanism for weight adjustment, and finally the total evaluation results are used to evaluate children's spoken speech. The experiment uses the children's reading corpus provided by the "export chapter" software, and the experimental environment uses the TensorFlow platform. The experimental results show that, compared with the traditional model, this model can accurately judge the correctness of spoken speech and the fluency of reading aloud, and the scores obtained by its evaluation model are more accurate.
Feature Incremental Extreme Learning Machine
ZHAO Zhong-tang, ZHENG Xiao-dong
Computer Science. 2019, 46 (11A): 112-116. 
Abstract PDF(1777KB) ( 338 )   
References | RelatedCitation | Metrics
In different application fields of machine learning, many excellent extreme learning machine classification models have been produced. Researchers are often willing to share the structure and parameters of these models, but are reluctant to share the original training data. To solve the problem of how to use a small number of samples with new features together with an existing classifier to generate a more efficient classifier, this paper proposed a feature incremental extreme learning machine, which can learn knowledge from samples with new features and improve the recognition accuracy of existing models. Test results on real-world datasets show that the proposed algorithm works effectively and improves the recognition accuracy of existing models without requiring the previous training samples.
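The extreme learning machine underlying this work can be sketched in a few lines: input weights are random and fixed, and only the output weights are solved in closed form. The following is a minimal sketch of a basic ELM, not the paper's feature-incremental variant; the toy XOR data and layer size are illustrative.

```python
import numpy as np

def elm_train(X, y, n_hidden=16, seed=0):
    """Basic extreme learning machine: random, fixed input weights and a
    closed-form least-squares solve for the output weights (a sketch of the
    ELM the paper builds on, not its feature-incremental variant)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                   # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # toy XOR targets
y = np.array([0., 1., 1., 0.])
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is learned, retraining on new data is a single pseudo-inverse, which is what makes incremental extensions of ELM attractive.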
Traffic Signal Control Based on Double Deep Q-learning Network with Dueling Architecture
LAI Jian-hui
Computer Science. 2019, 46 (11A): 117-121. 
Abstract PDF(2106KB) ( 379 )   
References | RelatedCitation | Metrics
The intersection is the core and hub of the urban road network. Reasonable optimization of signal control at intersections can greatly improve the operational efficiency of the urban transportation system. Using real-time traffic information as input and dynamically adjusting the phase time of traffic signals has become an important direction of current research. This paper proposed a traffic signal control method based on a double deep Q-learning network with dueling architecture (D3QN). The deep learning network is combined with the traffic signal controller to form an intelligent agent for adjusting the signal control strategy of the intersection. The DTSE (Discrete Traffic State Encoding) method is used to transform the traffic state of the intersection into a two-dimensional matrix composed of the position and velocity information of the vehicles. High-level features are then captured by the deep neural network, enabling accurate perception of the traffic state. On this basis, an adaptive traffic signal control strategy is realized through reinforcement learning. Finally, the traffic micro-simulator SUMO is used for simulation experiments, with fixed-timing control and actuated control methods as baselines. The results show that the proposed method achieves better control performance and is therefore feasible and effective.
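The DTSE step can be illustrated as follows: an approach lane is divided into cells, and each cell records occupancy and normalized vehicle speed. This is a sketch only; the (position, speed) input format, lane length, cell count and speed cap are assumptions, not the paper's exact interface.

```python
import numpy as np

def dtse_encode(vehicles, lane_len=100.0, n_cells=20, v_max=15.0):
    """DTSE sketch: divide an approach lane into cells and build a 2 x n_cells
    matrix of cell occupancy and normalized speed. The (position, speed) input
    format and the cell/speed constants are illustrative assumptions."""
    state = np.zeros((2, n_cells))
    cell = lane_len / n_cells
    for pos, speed in vehicles:
        i = min(int(pos // cell), n_cells - 1)
        state[0, i] = 1.0                 # occupancy channel
        state[1, i] = speed / v_max       # normalized-speed channel
    return state

state = dtse_encode([(3.0, 10.0), (57.0, 0.0)])   # two vehicles on the lane
```

The resulting matrix (stacked over lanes) is the image-like input a deep Q-network can convolve over.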
Stock Text Theme Recognition Based on Deep Fusion
ZHANG Jia-hui, CHEN Zhi-yuan, ZHAO Feng, AN Zhi-yong, XIE Qing-song
Computer Science. 2019, 46 (11A): 122-126. 
Abstract PDF(2273KB) ( 201 )   
References | RelatedCitation | Metrics
The stock market occupies an important position in the capital market and is a barometer of the economy. Experts' comments on stocks are an important basis for investors' investment decisions. Therefore, how to quickly and effectively capture the topic information of the many expert stock reviews has become a hot spot in the field of stock research. However, most current stock text topic recognition algorithms use a single criterion for feature selection and a single classification model. In general, a single criterion can only reflect the recognition of a text topic from one side and cannot fully capture the topic's main features. In fact, different feature selection criteria and classifier models understand the text from different sides, and the captured feature information is strongly complementary. Therefore, in order to improve the accuracy of stock text topic recognition, this paper fuses stock texts at multiple levels from the perspective of information fusion: 1)the feature selection layer, which performs weighted fusion of multiple feature selection methods so as to fully characterize stock text features; 2)the decision-making layer, which, based on SVM-score, performs decision-level fusion of multiple classifiers and can improve the accuracy of text recognition. Experiments based on measured data show that the recognition accuracy of the proposed multi-layer fusion algorithm is significantly improved compared with single-mode text topic recognition methods.
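The feature-selection-layer idea can be illustrated as a weighted combination of normalized scores from different criteria. The two criteria, their scores and the equal weights below are placeholders, not the paper's configuration.

```python
import numpy as np

# Weighted fusion of feature-selection scores, a sketch of the paper's
# feature-selection-layer idea: normalize each criterion's scores to [0, 1]
# and combine them with weights. The criteria, scores and equal weights are
# illustrative placeholders.
chi2_scores = np.array([0.8, 0.1, 0.5])   # e.g. chi-square statistic per feature
ig_scores = np.array([0.6, 0.3, 0.9])     # e.g. information gain per feature

def norm01(s):
    return (s - s.min()) / (s.max() - s.min())

fused = 0.5 * norm01(chi2_scores) + 0.5 * norm01(ig_scores)
ranking = np.argsort(-fused)              # best feature first
```

A feature mediocre under one criterion but strong under another (feature 2 here) can end up ranked first, which is exactly the complementarity the paper exploits.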
Traffic Efficiency Analysis of Traffic Road Network Based on Percolation Theory
GAO Hua-bing, SONG Cong-cong, CHEN Bo, LIU Zhi
Computer Science. 2019, 46 (11A): 127-133. 
Abstract PDF(6359KB) ( 187 )   
References | RelatedCitation | Metrics
Aiming at the congestion phenomenon of urban road networks, percolation theory is used to analyze the traffic efficiency of the road network model. Firstly, using the geographic data of actual urban roads, the original method is used to construct the traffic road network model. Then, by quantifying the traffic efficiency of the road network, the influence of traffic jams on the traffic situation under different weather conditions is analyzed. This paper mainly evaluated the traffic situation through the formulation of rules, the analysis of thresholds, the division of strongly connected subgraphs and the determination of traffic efficiency. The influence of weather factors on the traffic network was verified under different weather conditions.
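The threshold analysis can be sketched as classic percolation: drop road links whose relative speed falls below a threshold q and measure the giant connected cluster, whose collapse marks the network-wide congestion transition. The toy graph and speed ratios below are illustrative, and undirected connectivity stands in for the paper's strongly connected subgraphs.

```python
# Edge (a, b) -> relative speed (observed speed / free-flow speed); toy values.
edges = {(0, 1): 0.9, (1, 2): 0.4, (2, 3): 0.8, (3, 4): 0.2, (0, 2): 0.7}

def giant_cluster(edges, q):
    adj = {}
    for (a, b), ratio in edges.items():
        if ratio >= q:                          # keep only functional links
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                            # iterative DFS
            node = stack.pop()
            size += 1
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        best = max(best, size)
    return best

sizes = {q: giant_cluster(edges, q) for q in (0.1, 0.5, 0.85)}
```

Sweeping q and watching `sizes` shrink locates the critical threshold at which the network fragments, which is how weather conditions can be compared.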
Path Planning Method of Large-scale Fire Based on Multiple Starting Points and Multiple Rescue Points
LI Shan-shan, LIU Fu-jiang, LIN Wei-hua
Computer Science. 2019, 46 (11A): 134-137. 
Abstract PDF(2099KB) ( 260 )   
References | RelatedCitation | Metrics
Aiming at the real-time path planning of joint emergency rescue with multiple starting points, multiple rescue points and multiple exits, an improved ant colony algorithm (IACA) was proposed and a combined optimization path construction method was designed. In order to improve the convergence of the ant colony algorithm, this paper updated the equivalent distance between two position nodes in real time, improved the pheromone update rules, and adaptively adjusted the pheromone volatility parameter. A local search algorithm that effectively combines with the ant colony algorithm was constructed to improve the ability of fast optimization. To overcome the limitation of traditional path planning to a single emergency rescue, this paper proposed a path construction method based on the combined optimization ant colony algorithm. The simulation results show that the improved ant colony algorithm based on combinatorial optimization can quickly find a set of paths from multiple starting points to multiple rescue points and back to multiple exits in real time, with faster convergence and shorter paths, which can improve the speed and optimization quality of large-scale emergency rescue route planning.
Chinese Named Entity Recognition Method Based on BERT
WANG Zi-niu, JIANG Meng, GAO Jian-ling, CHEN Ya-xian
Computer Science. 2019, 46 (11A): 138-142. 
Abstract PDF(1806KB) ( 2018 )   
References | RelatedCitation | Metrics
In order to solve the problems of low accuracy of traditional machine learning algorithms in Chinese entity recognition, high dependence on feature design and poor adaptability across domains, a recurrent neural network method based on bidirectional encoder representations from transformers (BERT) was proposed for named entity recognition. Firstly, BERT is trained on a large-scale unlabeled corpus to obtain abstract features of the text. Then the BiLSTM neural network is used to obtain the contextual features of the serialized text. Finally, the corresponding entities are extracted by sequence labeling with a CRF. The method combines the BERT and BiLSTM-CRF models for Chinese entity recognition, and achieves an F1 value of 94.86% on the People's Daily dataset from the first half of 1998 without adding any handcrafted features. Experiments show that this method improves the precision, recall and F1 value of entity recognition, indicating its effectiveness.
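The CRF decoding step at the end of such a pipeline is standard Viterbi dynamic programming over emission and transition scores. The scores below are toy values (in practice the emissions come from the BiLSTM and the transitions are learned), and the tag set is reduced to B/I/O.

```python
import numpy as np

# Viterbi decoding -- the CRF step that extracts the tag sequence in a
# BiLSTM-CRF pipeline. Toy scores, 3 tokens, 3 tags.
tags = ["B", "I", "O"]
emis = np.array([[2.0, 0.1, 0.5],      # per-token tag scores
                 [0.3, 1.8, 0.4],
                 [0.2, 0.1, 1.5]])
trans = np.array([[0.1, 1.0, 0.2],     # trans[i, j]: score of tag i -> tag j
                  [0.1, 0.8, 0.9],
                  [0.7, -1.0, 0.5]])
n, k = emis.shape
score = emis[0].copy()
back = np.zeros((n, k), dtype=int)
for t in range(1, n):                  # dynamic programming over positions
    cand = score[:, None] + trans + emis[t][None, :]
    back[t] = cand.argmax(axis=0)
    score = cand.max(axis=0)
path = [int(score.argmax())]
for t in range(n - 1, 0, -1):          # follow back-pointers
    path.append(int(back[t, path[-1]]))
decoded = [tags[i] for i in reversed(path)]
```

The transition matrix is what lets the CRF forbid invalid sequences (e.g. an I tag not preceded by B), which per-token softmax classification cannot.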
Research on Volatility Forecasting of RMB Exchange Rate Based on Public Opinion
CHENG Zhou, YU Zheng, GUO Yi, WANG Zhi-hong
Computer Science. 2019, 46 (11A): 143-148. 
Abstract PDF(1959KB) ( 329 )   
References | RelatedCitation | Metrics
Public opinion has an impact on financial volatility, so it plays an essential role in monitoring, analysis and anomaly detection for the financial market. Due to the diversity of public opinion and the complexity of the RMB exchange rate, how to better quantify the impact of public opinion has important industrial significance for realizing the monitoring and analysis of the RMB exchange rate. This paper firstly performed pre-processing on public news of the foreign exchange market, such as noise filtering and word segmentation. Meanwhile, it constructed a series of features for volatility forecasting of the RMB exchange rate based on domain knowledge of foreign exchange rates. Moreover, a novel influence model was proposed to represent the relationship between public opinion and the RMB exchange rate. Finally, the volatility forecasting model for the RMB exchange rate is realized on a real dataset. The experimental results verify that the proposed method can effectively forecast the volatility of the RMB exchange rate.
Data Science
Database of Chinese Domestic Films for Box-office Revenue Forecasting
SHI Zheng, XU Ming-xing
Computer Science. 2019, 46 (11A): 149-152. 
Abstract PDF(1989KB) ( 433 )   
References | RelatedCitation | Metrics
The prediction of film box-office revenues is a hot research area in the global film industry. A rich film database is the cornerstone of such research. Aiming at the gap between the film industries in China and western countries, and the limited records of Chinese domestic films, this paper established a database of Chinese domestic films for box-office revenue forecasting. Firstly, the global status of film box-office revenue forecasting research is reviewed. Secondly, the ideas and detailed procedures for establishing the database of domestic films are introduced. Finally, a comparison between the proposed database and well-established film databases from other countries is performed by using the same box-office revenue prediction method. The test results show that the proposed database delivers performance similar to the other databases, confirming that the domestic film database is valid for forecasting box-office revenues.
Application of Active Learning in Recommendation System
ZHAO Hai-yan, WANG Jing, CHEN Qing-kui, CAO Jian
Computer Science. 2019, 46 (11A): 153-158. 
Abstract PDF(2748KB) ( 410 )   
References | RelatedCitation | Metrics
In recent years, recommender systems have developed very quickly and are becoming more and more mature. However, many approaches are based on an ideal assumption, i.e., that there are plenty of sample data which can help us train a mature model to predict or recommend. In actual industrial production, most users and products lack rating information or consumption records, and datasets formed by historical accumulation are unevenly distributed, so it is hard to learn a reliable model. Active learning considers that the benefit of each item to the system is different, so some special items can be selected through specific strategies, and the related preference information can be actively obtained through interaction between the user and the item. Active learning applied in recommender systems attempts to train a model with fewer but higher-quality samples, which improves the user experience and mitigates the impact of imbalanced datasets. The applications of active learning in recommender systems in recent years were reviewed and summarized, and future directions were also discussed in this paper.
Study on Interdisciplinary Model of Construction of Big Data Discipline in China
NING Hui-cong
Computer Science. 2019, 46 (11A): 159-162. 
Abstract PDF(1817KB) ( 323 )   
References | RelatedCitation | Metrics
With the vigorous development of the new generation of information technology represented by big data, cloud computing and artificial intelligence, the digital economy has become an important engine driving China's economic growth. It is of great significance to speed up the construction of the big data discipline and train a new generation of information technology talents. At present, many universities and research institutes at home and abroad carry out the training of big data talents, but there is no mature model for how to carry out the construction of the big data discipline. Therefore, this paper summarized the existing achievements in the construction of the big data discipline at home and abroad, and used the Delphi method (expert investigation method) and case analysis to conduct the analysis. Lastly, combined with interdisciplinary research and the personnel training mechanism, the "point", "line", "plane" and "three-dimensional" interdisciplinary model for the construction of the big data discipline in China was proposed, to provide a useful reference for the interdisciplinary development of the construction of the big data discipline in our country.
Research on Relationship Between Bipartite Network Recommendation Algorithm and Collaborative Filtering Algorithm
ZHOU Bo
Computer Science. 2019, 46 (11A): 163-166. 
Abstract PDF(3217KB) ( 213 )   
References | RelatedCitation | Metrics
This paper introduced the basic principles of the collaborative filtering algorithm and the bipartite network recommendation algorithm, and proposed a general bipartite network recommendation algorithm. The internal relationship between the two algorithms was analyzed. The results show that the collaborative filtering algorithm is a special case of the bipartite network recommendation algorithm, and the bipartite network algorithm is proved to perform better than the collaborative recommendation algorithm. This research systematizes and unifies the theory of bipartite recommendation algorithms and promotes the further development of recommendation algorithms.
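A minimal member of the bipartite recommendation family discussed here is mass diffusion (ProbS): resource on a user's collected items spreads to the users holding them and back to items. The 3-user by 4-item adjacency matrix below is toy data.

```python
import numpy as np

# Mass diffusion (ProbS) on a user-item bipartite graph; toy adjacency matrix.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
k_user = A.sum(axis=1)                    # user degrees
k_item = A.sum(axis=0)                    # item degrees
# W[i, j]: fraction of item j's resource that ends up on item i after the
# item -> user -> item spreading round.
W = (A / k_user[:, None]).T @ (A / k_item[None, :])
scores = W @ A[0]                         # recommendation scores for user 0
scores[A[0] > 0] = 0                      # mask items user 0 already collected
```

Choosing different normalizations of `W` recovers different algorithms in the family; the paper's point is that user-based collaborative filtering corresponds to one particular choice.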
Dynamical Network Clustering Algorithm Based on Weighting Strategy
WANG Zi-jie, ZHOU Ya-jing, LI Hui-jia
Computer Science. 2019, 46 (11A): 167-171. 
Abstract PDF(3069KB) ( 169 )   
References | RelatedCitation | Metrics
Network dynamics play an important role in analyzing the correlation between functional properties and the topological structure. This paper proposed a novel dynamical iteration algorithm incorporating the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness to improve the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, this paper utilized a stability function defined as the Markov random walk auto-covariance. The algorithm is very efficient and does not need the number of communities to be specified in advance, and it naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
Personalized Question Recommendation Based on Autoencoder and Two-step Collaborative Filtering
XIONG Hui-jun, SONG Yi-fan, ZHANG Peng, LIU Li-bo
Computer Science. 2019, 46 (11A): 172-177. 
Abstract PDF(2347KB) ( 352 )   
References | RelatedCitation | Metrics
Personalized question recommendation is an effective way to improve learning efficiency. It helps students get rid of the "Massive Questions" and has important significance for achieving adaptive teaching and promoting education equity. However, most personalized question recommendation methods are based on collaborative filtering without focusing on knowledge points, which makes the positioning of the recommended questions inaccurate. In order to solve this problem, a personalized question recommendation system based on a deep autoencoder and two-step collaborative filtering was adopted in this paper. Firstly, considering students' mastery of knowledge points, two-step collaborative filtering question recommendation based on knowledge points is realized. Secondly, item response theory and the deep autoencoder are used to predict the students' scores and comprehensive scores on the recommended questions involving the recommended knowledge points. Finally, the prediction results are combined into a joint decision, the difficulty of the final personalized recommendation questions is controlled, and a list of final recommended questions is generated. Comparison experiments verify that the results of the proposed recommendation method are more personalized and accurate than those of traditional question recommendation methods.
Recommendation Methods Considering User Indirect Trust and Gaussian Filling
ZHU Pei-pei, LONG Min
Computer Science. 2019, 46 (11A): 178-184. 
Abstract PDF(2251KB) ( 227 )   
References | RelatedCitation | Metrics
Existing recommendation algorithms introduce users' explicit trust, which can effectively improve recommendation accuracy, but they do not fully exploit the social relationship; indirect trust has richer potential value in social information and further affects recommendation quality. Although there are related studies on indirect trust, the calculation is complicated and the paths of trust transmission are not sufficiently exploited. Therefore, through the trust transfer network diagram, the ratio of each branch node to the total path nodes is multiplied node by node to obtain the indirect trust value globally. Secondly, information entropy is used to analyze the actual performance of the user's social trust relationship, and the trust is adjusted to form IpmTrust, a calculation model of indirect trust. Based on this model, a recommendation algorithm GITCF considering user indirect trust is designed. The algorithm uses the Gaussian model to fill the rating matrix, then uses the modified cosine to calculate user similarity, and after IpmTrust calculates the indirect trust, the user trust and the similarity are linearly weighted and merged. Finally, improved neighbor prediction is used for recommendation. The experiment was carried out on the Matlab simulation platform, and the RMSE and MAE evaluations were compared. GITCF was compared with existing and traditional recommendation algorithms: it improves by nearly 7% over the existing recommendation algorithms, and is also higher than the trust-free ones. The experimental results show that the IpmTrust model has a certain validity, and the proposed algorithm can improve the quality of recommendation results.
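The Gaussian filling and modified-cosine steps can be sketched as follows: missing ratings are sampled from a Gaussian fitted to each user's observed ratings, and user similarity is then computed on mean-centered rows. The 3x4 rating matrix is toy data, and the IpmTrust fusion step is omitted.

```python
import numpy as np

# Sketch of GITCF's preprocessing; 0 marks "unrated". Toy data.
R = np.array([[5., 3., 0., 1.],
              [4., 0., 4., 1.],
              [1., 1., 0., 5.]])
rng = np.random.default_rng(0)
F = R.copy()
for u in range(R.shape[0]):
    rated = R[u] > 0
    mu, sd = R[u, rated].mean(), R[u, rated].std()
    # fill each missing cell from the user's own rating distribution
    F[u, ~rated] = np.clip(rng.normal(mu, sd, (~rated).sum()), 1, 5)

C = F - F.mean(axis=1, keepdims=True)          # remove each user's rating bias
U = C / np.linalg.norm(C, axis=1, keepdims=True)
S = U @ U.T                                    # user-user similarity matrix
```

Filling before computing similarity avoids the near-empty overlaps that make plain cosine similarity unreliable on sparse matrices.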
VID Model of Vehicles-infrastructure-driver Collaborative Control in Big Data Environment
CHENG Xian-yi, SHI Quan, ZHU Jian-xin, CHEN Feng-mei, DAI Ran-ran
Computer Science. 2019, 46 (11A): 185-188. 
Abstract PDF(2797KB) ( 329 )   
References | RelatedCitation | Metrics
Aiming at the serious redundancy in the centralized control mode of the Internet of Vehicles, and the high cost of implementing mutually reinforcing multi-source data, this paper described the VID (Vehicles-Infrastructure-Driver) model of collaborative control from the perspective of big data. The model consists of a perception center and distributed task execution. The unified perception center provides public perception services and integrates perception resource management, task scheduling and data collection. The Vehicles-Infrastructure Cooperative System (VCS), the Driver-Vehicles Cooperative System and Driver Behavior Analysis perform perceptual tasks in a decentralized way. The VID model opens up the global and local loops from perception to service, and has good applicability for scenarios requiring collaboration.
Research and Application of Multi-label Learning in Intelligent Recommendation
ZHU Zhi-cheng, LIU Jia-wei, YAN Shao-hong
Computer Science. 2019, 46 (11A): 189-193. 
Abstract PDF(1922KB) ( 412 )   
References | RelatedCitation | Metrics
The collaborative filtering algorithm is used in traditional intelligent recommendation, but it cannot deal with users' rating information well; data sparsity and extreme data affect the quality of recommendation. Therefore, the recommendation problem is transformed into a multi-label learning problem, and a complete intelligent recommendation system based on an HMM model and user portraits was proposed in this paper. Firstly, different data processing mechanisms are set up to improve the generalization ability of the algorithm. Secondly, an improved HMM model with anti-Markov property is proposed to solve the problem of data sparsity. Finally, a user portrait is constructed to screen the learning experience of the HMM model and obtain the final recommendation service. Experimental results show that multi-label learning can effectively improve the accuracy and efficiency of intelligent recommendation.
Multilayer Perceptron Classification Algorithm Based on Spectral Clustering and Simultaneous Two Sample Representation
LIU Shu-dong, WEI Jia-min
Computer Science. 2019, 46 (11A): 194-198. 
Abstract PDF(1621KB) ( 169 )   
References | RelatedCitation | Metrics
Classification learning from imbalanced datasets is always one of the hot topics in the data mining and machine learning domains. Data-level, algorithm-level and ensemble solutions are the three main methods so far for addressing imbalanced learning. Undersampling, one of the data-level solutions, is widely utilized in many imbalanced learning scenarios. However, its drawback is discarding potentially useful majority-class instances. In this paper, spectral clustering was introduced to sample the majority-class instances so as to build a simultaneous two-sample representation. Firstly, all majority-class instances are divided into different clusters by spectral clustering analysis, and different numbers of representative samples are extracted from the clusters according to the size of each cluster and the average distance between each cluster and the minority-class instances; then the simultaneous two-sample representation is generated from the extracted instances and the minority-class instances. The proposed method not only alleviates the issue of data explosion in the simultaneous two-sample representation, but also avoids the loss of useful information in random sampling. Finally, several experiments on nine groups of datasets from UCI verify its validity.
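The cluster-then-sample idea can be sketched as follows: cluster the majority class, then draw representatives from each cluster in proportion to its size. The paper clusters with spectral clustering; a tiny k-means stands in here to keep the sketch dependency-free, and the distance-based quota adjustment is omitted.

```python
import numpy as np

def cluster_undersample(X_maj, n_clusters=3, n_keep=6, seed=0):
    """Cluster the majority class, then sample each cluster in proportion to
    its size (k-means stands in for the paper's spectral clustering)."""
    rng = np.random.default_rng(seed)
    centers = X_maj[rng.choice(len(X_maj), n_clusters, replace=False)]
    for _ in range(20):                                    # Lloyd iterations
        d = np.linalg.norm(X_maj[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X_maj[labels == k].mean(axis=0)
    kept = []
    for k in range(n_clusters):                            # proportional quotas
        idx = np.flatnonzero(labels == k)
        if len(idx) == 0:
            continue
        quota = max(1, round(n_keep * len(idx) / len(X_maj)))
        kept.extend(rng.choice(idx, min(quota, len(idx)), replace=False))
    return X_maj[kept]

rng = np.random.default_rng(1)
X_maj = np.vstack([rng.normal(c, 0.1, (20, 2)) for c in (0.0, 5.0, 10.0)])
sample = cluster_undersample(X_maj)
```

Sampling per cluster preserves the modes of the majority distribution that uniform random undersampling can wipe out.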
User Collaborative Filtering Recommendation Algorithm Based on All Weighted Matrix Factorization
DENG Xiu-qin, LIU Tai-heng, LIU Fu-chun, LONG Yong-hong
Computer Science. 2019, 46 (11A): 199-203. 
Abstract PDF(3923KB) ( 200 )   
References | RelatedCitation | Metrics
Aiming at the problem that the traditional user collaborative filtering recommendation algorithm treats all users' preferences for an item equally, a user collaborative filtering model based on all-weighted matrix factorization was proposed. Firstly, the model designs frequency-aware weights for observed values, and non-uniformly designs user-oriented weights for unobserved values. Then the weights of the observed and unobserved values are combined, the similarity between user reputation and user relationship is determined according to the ratings, and the user collaborative filtering model fusing all-weighted matrix factorization is constructed. In order to verify the performance of the proposed recommendation algorithm, experiments were carried out on three real datasets: Douban, Epinions and Last.fm. The experimental results demonstrate that the proposed AWMF_UCFR algorithm achieves significant improvements in recommendation accuracy over the MF algorithm, the WRMF-UO algorithm and the SoRS algorithm.
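The all-weighted idea can be sketched as follows: every cell of the rating matrix contributes to the loss, with observed cells weighted more heavily than unobserved ones. This is a simplification of the frequency-aware / user-oriented weighting; the toy matrix, the two weight values and the hyperparameters are illustrative.

```python
import numpy as np

# All-weighted matrix factorization sketch on a toy 4x3 rating matrix.
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [0., 2., 5.],
              [1., 5., 0.]])
W = np.where(R > 0, 1.0, 0.05)            # confidence: observed vs unobserved
rng = np.random.default_rng(0)
P = 0.1 * rng.normal(size=(4, 2))         # user latent factors
Q = 0.1 * rng.normal(size=(3, 2))         # item latent factors
lr, reg = 0.02, 0.01
loss0 = np.sum(W * (R - P @ Q.T) ** 2)
for _ in range(1000):                     # full-batch weighted gradient descent
    E = W * (R - P @ Q.T)                 # weighted residual
    P += lr * (E @ Q - reg * P)
    Q += lr * (E.T @ P - reg * Q)
loss1 = np.sum(W * (R - P @ Q.T) ** 2)
pred = P @ Q.T
```

Giving unobserved cells a small nonzero weight treats them as weak negative evidence instead of ignoring them, which is the key difference from plain MF.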
Cell Clustering Algorithm Based on MapReduce and Strongly Connected Fusion
HU Ying-shuang, LU Yi-hong
Computer Science. 2019, 46 (11A): 204-207. 
Abstract PDF(2778KB) ( 171 )   
References | RelatedCitation | Metrics
With the explosive growth of large location data, most traditional serial clustering algorithms cannot process big data efficiently. In order to solve this problem, more and more researchers are studying parallel clustering algorithms. It is difficult to guarantee the clustering quality of parallel clustering algorithms, so it is important to study algorithms for fusing the results of parallel clustering. Therefore, a grid clustering algorithm based on strongly connected fusion was proposed. Firstly, the clustering result of the data subsets is obtained by the improved DBSCAN algorithm based on MapReduce. Next, the relationship between grid and cluster is analyzed, and the concepts of grid-cluster, connectivity and strong connectivity of grid-clusters are defined. Then the connectivity weight matrix between grid-clusters is calculated. Finally, whether to merge two grid-clusters is decided according to the connectivity weight. The experimental results show that the proposed algorithm achieves high efficiency and high clustering quality in processing large location data.
Implementation of ETL Scheme Based on Storm Platform
LIANG Kui-kui
Computer Science. 2019, 46 (11A): 208-211. 
Abstract PDF(3278KB) ( 174 )   
References | RelatedCitation | Metrics
With the continuous development of the Internet in various fields, data begin to show the characteristics of structural diversity and massive volume. In the face of the impact of massive data, how to improve the efficiency of ETL is crucial. In view of the problems of inconsistent data sources and formats in "information islands" and poor real-time data collection, this paper proposed vertically segmenting the ETL workflow and horizontally segmenting the pending data set, and established a stream-based ETL processing scheme on the Storm platform. At the same time, to address the shortcoming that Storm is insensitive to the CPU load of worker nodes during task assignment, the CPU load information of the worker nodes is recorded by a timed task to optimize the slot allocation of the Storm scheduler, so that the load of the Storm cluster is more balanced. The experimental results show that the scheme can effectively improve the processing efficiency of ETL, and that the slot allocation optimization improves system stability and processing efficiency.
Feature Selection Method Based on Ant Colony Optimization and Random Forest
LI Guang-hua, LI Jun-qing, ZHANG Liang, XIN Yan-sen, DENG Hua-wei
Computer Science. 2019, 46 (11A): 212-215. 
Abstract PDF(1665KB) ( 479 )   
References | RelatedCitation | Metrics
In the face of massive high-dimensional data, eliminating redundant features through feature selection has become one of the important issues faced by information science and technology today. Traditional feature selection methods are not suitable for searching the whole feature space, and their performance and accuracy are low. In this paper, a method of feature selection based on ant colony optimization and random forest was proposed. This method takes the importance scores of the random forest as the heuristic factor of ant colony optimization, uses ant colony optimization to search intelligently, and uses the result of feature selection as the evaluation index to feed back the pheromone of the ant colony in real time. Experiments show that, compared with traditional feature selection methods, this feature selection method can effectively reduce the number of features in datasets and improve the accuracy of data classification.
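The search loop can be sketched as follows. The importance vector plays the role of the random forest's feature importances (random placeholders stand in here for a trained forest), and the summed importance of a subset stands in for the classification-accuracy feedback the paper uses.

```python
import numpy as np

def aco_select(importance, n_select, n_ants=10, n_iter=30, rho=0.3, seed=0):
    """Ant colony feature selection sketch: importance scores serve as the
    heuristic factor, and pheromone is reinforced by each ant's subset
    quality (here just the summed importance, a stand-in for CV accuracy)."""
    rng = np.random.default_rng(seed)
    n = len(importance)
    tau = np.ones(n)                                  # pheromone per feature
    best, best_q = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = tau * importance                      # pheromone x heuristic
            subset = rng.choice(n, n_select, replace=False, p=p / p.sum())
            quality = float(importance[subset].sum())
            if quality > best_q:
                best, best_q = subset, quality
            tau[subset] += quality                    # pheromone deposit
        tau *= 1 - rho                                # evaporation
    return np.sort(best)

imp = np.array([0.05, 0.30, 0.02, 0.40, 0.08, 0.15])  # placeholder importances
chosen = aco_select(imp, n_select=3)
```

The evaporation factor keeps early lucky subsets from locking in, while the heuristic biases ants toward features the forest already considers informative.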
Nearest Neighbor Optimization k-means Clustering Algorithm
LIN Tao, ZHAO Can
Computer Science. 2019, 46 (11A): 216-219. 
Abstract PDF(1923KB) ( 268 )   
References | RelatedCitation | Metrics
Traditional k-means algorithms usually ignore the distribution of the data samples, assigning all of them, whether at the cluster edge, near the cluster center or outliers, to the cluster whose center is nearest, in accordance with the minimum-distance principle, without considering the relationship between a data sample and the other clusters. If the distance between a data sample and another cluster is close to the minimum distance, the data sample is very close to both clusters, and obviously the direct division method is not reasonable. Aiming at this problem, this paper presented a nearest-neighbor-optimized clustering algorithm (1NN-kmeans). Using the nearest-neighbor idea, samples that do not firmly belong to a certain cluster are assigned to the cluster that their nearest neighbor sample belongs to. The experimental results show that 1NN effectively reduces the number of iterations, improves the clustering accuracy, and finally achieves better clustering results.
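The refinement can be sketched as follows: after standard Lloyd iterations, samples whose two nearest centers are nearly equidistant are reassigned to the cluster of their nearest neighboring sample rather than the nearest center. This is an illustrative reading of the 1NN-kmeans idea; the initialization, margin and toy data are assumptions.

```python
import numpy as np

def onenn_kmeans(X, k=2, margin=0.2, n_iter=20, seed=0):
    """k-means followed by a 1-NN reassignment of boundary samples (sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):                        # standard Lloyd iterations
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    gap = np.sort(d, axis=1)
    ambiguous = (gap[:, 1] - gap[:, 0]) < margin   # near a cluster boundary
    nn = np.linalg.norm(X[:, None] - X[None], axis=2)
    np.fill_diagonal(nn, np.inf)
    for i in np.flatnonzero(ambiguous):
        labels[i] = labels[nn[i].argmin()]         # follow the nearest neighbor
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(5.0, 0.3, (10, 2))])
labels = onenn_kmeans(X)
```

Only ambiguous samples pay the cost of the pairwise-distance lookup, so the refinement adds little overhead for well-separated data.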
Design of Distributed News Clustering System Based on Big Data Computing Framework
LU Xian-hua, WANG Hong-jun
Computer Science. 2019, 46 (11A): 220-223. 
Abstract PDF(1876KB) ( 420 )   
References | RelatedCitation | Metrics
Rapid clustering of massive Internet news to generate hot topics is an important research direction. Aiming at several key problems of large-scale text clustering, namely similarity calculation, distributed clustering and clustering result summary generation, this paper designed and implemented a Spark-based distributed news clustering system. Firstly, a GPU-accelerated deep similarity algorithm is used to calculate the similarity relationships of news texts. Then a graph clustering algorithm is used for news clustering. Finally, a short title is generated for each class as the class description. Experiments show that the proposed system has high performance and good scalability, and can effectively handle hotspot clustering tasks for large-scale news.
Top-N Personalized Recommendation Algorithm Based on Tag
MA Wen-kai, LI Gui, LI Zheng-yu, HAN Zi-yang, CAO Ke-yan
Computer Science. 2019, 46 (11A): 224-229. 
Abstract PDF(2390KB) ( 416 )   
References | RelatedCitation | Metrics
With the development of Web 2.0, the UGC tag system is receiving more and more attention. Tags can not only reflect users' interests, but also describe the innate character of an item. Available tag recommendation algorithms do not consider the influence of users' continuous behaviors. Although the traditional recommendation algorithm based on Markov chains produces recommendations through an emphasis on users' continuous behaviors, it cannot be applied to the tag recommendation of UGC because it operates directly on the two-dimensional relationship between user and item. Therefore, following the ideas of Markov chains and collaborative filtering, a personalized recommendation algorithm based on tags was proposed. The algorithm splits the three-dimensional relationship 〈user-tag-item〉 into the two two-dimensional relationships 〈user-tag〉 and 〈tag-item〉. Firstly, the interest degree is calculated through the application of Markov chains; then the corresponding items are matched through the recommended tags. To raise the accuracy of recommendation, a satisfaction model, a kind of probabilistic model, is established according to the influence of tags and the associated relationships among the tags of items. When calculating the interest degree and satisfaction degree of user-tag and user-item, the idea of collaborative filtering is also used to complement sparse data. Compared with available algorithms, this algorithm improves considerably in precision and recall on open datasets.
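The 〈user-tag〉 Markov-chain step can be sketched as estimating P(next tag | current tag) from users' historical tagging sequences; items carrying the most probable next tags would then be matched. The sequences below are toy data, and the satisfaction model and collaborative-filtering smoothing are omitted.

```python
from collections import defaultdict

# First-order Markov chain over tag sequences (toy data).
sequences = [["rock", "jazz", "rock", "pop"],
             ["rock", "jazz", "jazz"],
             ["pop", "rock", "jazz"]]
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1                 # transition counts tag a -> tag b

def next_tag_probs(tag):
    total = sum(counts[tag].values())
    return {t: c / total for t, c in counts[tag].items()}

probs = next_tag_probs("rock")            # interest in each tag after "rock"
```

Working in tag space keeps the transition matrix small and dense even when the item catalogue is huge and sparse, which is why the 〈user-tag-item〉 split helps.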
Model of Music Theme Recommendation Based on Attention LSTM
JIA Ning, ZHENG Chun-jun
Computer Science. 2019, 46 (11A): 230-235. 
Abstract PDF(1939KB) ( 720 )   
References | RelatedCitation | Metrics
Aiming at the problems of low classification accuracy, long periods, and the difficulty of meeting the demand for theme music in people's lives, a neural network model based on the attention mechanism and LSTM (Long Short-Term Memory) was designed. It consists of a music theme model and a music recommendation model. On the basis of using the attention mechanism and the LSTM network to realize music emotion classification, the music theme model effectively combines the audio codebook and the topic model to discriminate the subcategories of music themes under an emotion. In the music recommendation model, low-level descriptors and spectrograms are used to construct a joint representation of manual features and Convolutional Recurrent Neural Network (CRNN) features. The emotions expressed by the user's voice are obtained, and the user is given a precise music theme recommendation using this model. In the experiment, the two models were designed separately, and two different traditional models were used as baselines. The experimental results show that, compared with the traditional single model, this model not only improves the classification accuracy of the theme, but also accurately judges the emotion of the user's voice data, so as to achieve the recommendation of theme music.
Pattern Recognition & Image Processing
Brachial Plexus Ultrasound Image Optimization Based on Deep Learning and Adaptive Contrast Enhancement
YANG Tong, ZHANG Shan-shan, JIANG Fang-zhou, LI Yi-fei, YU Ge-hao, ZHAO Di
Computer Science. 2019, 46 (11A): 236-240. 
Abstract PDF(4397KB) ( 420 )   
In modern medicine, the segmentation and recognition of brachial plexus images is optimized by contrast enhancement to help physicians identify disease and tumors. Brachial plexus block is a commonly used method of local anesthesia in upper limb surgery and postoperative care. In order to accurately determine the position of the brachial plexus, hospitals extensively apply ultrasound equipment to detect and locate the nervous system. This paper described the accurate recognition and segmentation of the brachial plexus in dynamic ultrasound images based on deep learning and neural networks, and optimized the display of the ultrasound images through adaptive contrast enhancement of the cut-out regions. The experimental data come from Beijing Jishuitan Hospital and are divided into ultrasound images of patients and corresponding pictures of benign and malignant tumors. The contrast enhancement algorithm was used to process the extracted features. The experimental results show that this algorithm enhances the contrast of the image and the accuracy of the displayed content.
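As a simplified stand-in for the contrast enhancement step, global histogram equalization (a simpler, non-adaptive relative of the adaptive method used in the paper) can be written in a few lines of NumPy:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first occupied bin
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# low-contrast test image: intensities squeezed into [100, 130]
img = np.linspace(100, 130, 64 * 64).reshape(64, 64).astype(np.uint8)
out = hist_equalize(img)                          # stretched to the full range
```

Adaptive variants (e.g. CLAHE) apply the same remapping per tile with clipping, which is closer to what ultrasound enhancement pipelines typically use.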
Implementation and Application of Stereo Matching Method Based on Improved Multi-weight Sliding Window
DU Juan, SHEN Si-yun
Computer Science. 2019, 46 (11A): 241-245. 
Abstract PDF(2618KB) ( 187 )   
The key problem of stereo vision is to obtain accurate disparity values through stereo matching algorithms. However, most existing stereo matching algorithms are unable to obtain accurate disparities in low-texture regions. In order to solve the problems of low matching accuracy in low-texture regions and the large computational complexity of high-precision semi-global matching algorithms, a stereo matching algorithm based on an adaptive sliding window was proposed. The cost volume is first calculated by the AD-Census transform. The shape of the aggregation window and the weights of the pixels are adjusted for different regions. The cross-scale cost aggregation framework conforming to human visual characteristics is used to obtain the aggregated cost volume. Finally, the winner-take-all strategy is used to obtain the final disparity maps. Experiments show that the mismatch rate of the algorithm in low-texture regions decreases from 21.68% to 5.8%, lower than that of the traditional scheme, and the computation time is shorter than that of the semi-global algorithm.
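The final winner-take-all step can be sketched as follows (the cost volume here is random toy data rather than AD-Census output):

```python
import numpy as np

def wta_disparity(cost_volume):
    """Winner-take-all: for each pixel pick the disparity with minimum
    aggregated matching cost. cost_volume has shape (D, H, W)."""
    return np.argmin(cost_volume, axis=0)

D, H, W = 8, 4, 4
rng = np.random.default_rng(1)
cost = rng.random((D, H, W))
cost[3, 2, 2] = -1.0           # force the winning disparity 3 at pixel (2, 2)
disp = wta_disparity(cost)
```

All of the algorithmic effort (window shaping, pixel weighting, cross-scale aggregation) goes into producing a cost volume for which this simple argmin is reliable.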
Face Attributes in Wild Based on Deep Learning
GE Hong-kong, LUO Heng-li, DONG Jia-yuan
Computer Science. 2019, 46 (11A): 246-250. 
Abstract PDF(2792KB) ( 326 )   
Faces in the wild are huge in number and close to daily life, and the recognition of their attributes is valuable research. A face attribute recognition method named RMLARNet (Regional Multiple Layer Attributes Related Net) was proposed for faces in the wild, which explores a new feature extraction method and the relationships among attributes. The processing steps of this method are as follows: 1) features are extracted from regional parts of the image; 2) features are extracted from different layers of Inception V3 and concatenated to form the final face feature; 3) an attribute-relationship network is used for attribute recognition. The experiment is conducted on a balanced data set which is a subset of CelebA, and this method outperforms state-of-the-art methods.
Large-scale Automatic Driving Scene Reconstruction Based on Binocular Image
LI Yin-guo, ZHOU Zhong-kui, BAI Ling
Computer Science. 2019, 46 (11A): 251-254. 
Abstract PDF(3597KB) ( 319 )   
Large-scale smart driving scene reconstruction can feed back the surrounding road traffic environment information to the vehicle control system in the driving environment and realize the visualization of the environmental information. At present, existing three-dimensional reconstruction schemes are mainly oriented to structured scenes, and it is difficult for them to meet the real-time performance required by a smart driving system while ensuring a certain precision when performing three-dimensional reconstruction of large-scale unstructured smart driving scenes. In order to solve this problem, a three-dimensional scene reconstruction method based on binocular vision was proposed. Firstly, by optimizing the stereo matching strategy, the stereo matching efficiency is improved. Then the uniform-distance feature point extraction algorithm RSD is proposed to reduce the time consumption of 3D point cloud computing and triangulation and improve the real-time performance of large-scale smart driving scene reconstruction. The experimental results prove the effectiveness of the algorithm, which can be used to reconstruct large-scale smart driving scenes and can meet the real-time demand of an intelligent driving system.
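Once a disparity map is available, binocular triangulation recovers depth as Z = f·B/d; a minimal sketch (the focal length and baseline are invented example values, not the paper's rig):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Binocular triangulation: depth Z = f * B / d per valid pixel.
    disparity in pixels, focal length in pixels, baseline in metres."""
    depth = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0                 # zero disparity means no match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[64.0, 32.0],
                 [0.0, 16.0]])
Z = disparity_to_depth(disp, focal_px=640.0, baseline_m=0.12)
```

The resulting 3D points are what the RSD feature selection then thins out before triangulation of the scene mesh.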
Using Collinear Points Solving Intrinsic and External Parameters of Multiple Cameras
LUO Huan
Computer Science. 2019, 46 (11A): 255-259. 
Abstract PDF(1881KB) ( 199 )   
This paper used the geometric characteristics of collinear points to obtain the intrinsic parameters of the cameras. Firstly, the homography matrix between space collinear points and their image points is used to derive linear constraints on the intrinsic parameters and to solve the intrinsic parameters of multiple cameras. Then, according to the coordinates of the collinear points before and after movement in each camera, the rotation matrix and translation vector of each camera relative to the reference camera are obtained, and the extrinsic parameters of the cameras are solved. Finally, simulation data and real image experiments show the feasibility and effectiveness of this method.
Face Clustering Algorithm Based on Context Constraints
LUO Heng-li, WANG Wen-bo, GE Hong-kong
Computer Science. 2019, 46 (11A): 260-263. 
Abstract PDF(1783KB) ( 154 )   
Face clustering, which aims to automatically divide face images of the same identity into the same cluster, can be applied in a wide range of applications such as face annotation and image management. Traditional face clustering algorithms can achieve good precision but low recall. To handle this issue, this paper proposed a novel clustering algorithm with triangular constraints and context constraints. The proposed algorithm, based on a conditional random field model, takes triangular constraints as well as common context constraints in images into account. During the clustering iterations and after preliminary clustering, maximum similarity and people co-occurrence constraints are considered to merge the initial clusters. Experimental results reveal that the proposed face clustering algorithm can group faces efficiently and improve recall while keeping high precision, and accordingly enhances the overall clustering performance.
Handwritten Drawing Order Recovery Method Based on Endpoint Sequential Prediction
ZHANG Rui, ZHAN Yong-song, YANG Ming-hao
Computer Science. 2019, 46 (11A): 264-267. 
Abstract PDF(2322KB) ( 355 )   
To address the problem of recovering the dynamic stroke order of Chinese handwriting, a handwriting drawing order recovery model based on a deep learning method was designed. First, the handwriting image is preprocessed by coordinate regularization, thinning, and interruption at intersections; then the preprocessed image and the corresponding written coordinate sequence are used to generate the network's training samples. Each sample consists of a static handwriting image and a heat-map label containing the font's writing order. The model uses an end-to-end convolutional neural network. Finally, the trained network model is used to predict the static handwriting image to obtain the original writing order of the font. The experimental results show that the method can effectively recover the drawing order of handwritten fonts with fewer than five strokes.
Blind Image Identification Algorithm Based on HSV Quantized Color Feature and SURF Detector
HU Meng-qi, ZHENG Ji-ming
Computer Science. 2019, 46 (11A): 268-272. 
Abstract PDF(2586KB) ( 196 )   
Aiming at the problems that the features extracted from color images by existing copy-move forgery detection (CMFD) algorithms are not comprehensive and the matching time is too long, a blind identification algorithm for digital images using quantized color features and the SURF detector was studied. In the feature extraction process, the algorithm combines HSV fuzzy-quantized color features and SURF features to form a comprehensive description of color image content, called the FCQ-SURF features. K-Means clustering and the KNN method are used to improve matching efficiency in the feature matching stage. The experimental results show that the algorithm can detect and locate color image copy-move forgery well on the CASIA 1.0 and FAU color image test libraries. It also has a good detection effect for multiple tampering attacks and multi-region tampering. The experimental results demonstrate that the proposed algorithm achieves higher detection accuracy and better matching time for color image copy-move forgery.
Novel Normalization Algorithm for Training of Deep Neural Networks with Small Batch Sizes
WANG Yan, WU Xiao-fu
Computer Science. 2019, 46 (11A): 273-276. 
Abstract PDF(3071KB) ( 227 )   
The Batch Normalization (BN) algorithm has become a key ingredient of the standard toolkit for training deep neural networks. BN normalizes the input with the mean and variance computed over batches to mitigate possible gradient explosion or vanishing during training. However, the performance of the BN algorithm often degrades when it is applied to small batch sizes due to inaccurate estimates of the mean and variance. Batch ReNormalization (BRN) normalizes the input with the values of an exponential moving average (EMA), reducing the dependency of the normalization algorithm on batches. This paper proposed a novel normalization algorithm with improved estimates of the moving mean and variance, obtained by changing the initial value of the EMA and adding corrections to the estimates. The experimental results show that the proposed algorithm achieves better convergence speed and accuracy than both the standard BN and BRN algorithms.
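The EMA correction idea can be illustrated on a scalar stream: a zero-initialized EMA underestimates the running mean early in training, and dividing by 1 − mᵗ removes that startup bias (a generic sketch of one standard correction, not necessarily the paper's exact scheme):

```python
def ema_with_correction(values, momentum=0.9):
    """Exponential moving average with zero-init bias correction:
    ema_t = m*ema_{t-1} + (1-m)*x_t, reported as ema_t / (1 - m**t)."""
    ema = 0.0
    out = []
    for t, x in enumerate(values, start=1):
        ema = momentum * ema + (1 - momentum) * x
        out.append(ema / (1 - momentum ** t))   # bias-corrected estimate
    return out

xs = [5.0] * 10                 # a constant stream whose true mean is 5
est = ema_with_correction(xs)   # corrected estimate is 5 from the first step
```

Without the correction, the first estimates would be 0.5, 0.95, ... and only slowly approach 5, which is exactly the small-batch startup error the paper targets.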
Lamp Language Recognition Technology Based on Daytime Driving
LI Kun, LI Xiang-feng
Computer Science. 2019, 46 (11A): 277-282. 
Abstract PDF(3450KB) ( 310 )   
Car lights not only provide illumination but are also an important way for vehicles to communicate with other vehicles while driving. In assisted driving, accurately understanding the light messages transmitted by surrounding vehicles is a prerequisite for making correct driving decisions. During daytime driving, due to the changeable environment, it is difficult to achieve good results in road tests by first matching the lights and then recognizing the lamp language. To this end, for the daytime driving situation, this paper proposed a lamp language recognition method based on vehicle detection. An Adaboost cascade classifier is trained to detect vehicles using a training method with updated samples. On this basis, the position distribution features of the vehicle rear are used to determine the region of interest of the lights. In the RGB color space, a color segmentation algorithm is proposed, which can accurately extract the positions of the lamps and judge the lighting state of the lamps within the region of interest, and the brightness feature of a lit lamp is used to eliminate false detections of the color segmentation algorithm. The high-mounted brake light is used as the recognition condition for the brake light lamp language, and historical frequency information is used as the recognition condition for the turn signals, completing the recognition of the rear lights during daytime driving. The experiment uses VS2010 and opencv3.4.9 as the implementation tools, and uses real driving recorder data provided by SAIC as the test data. In testing, the recognition accuracy of the trained classifier is 93%. Compared with the traditional Adaboost classifier, the recognition accuracy is improved by about 2%; the average accuracy of the lamp recognition algorithm is 93%, and the average running time is about 53 ms. The test results show that the classifier training method used in the experiment slightly improves detection accuracy, and the lamp recognition algorithm can accurately and simultaneously identify the brake lights and turn signals, and can basically guarantee real-time performance.
CSI Gesture Recognition Method Based on LSTM
LIU Jia-hui, WANG Yu-jie, LEI Yi
Computer Science. 2019, 46 (11A): 283-288. 
Abstract PDF(2832KB) ( 801 )   
Gesture recognition based on WiFi Channel State Information (CSI) has broad application prospects in human-computer interaction. At present, most methods require manual feature extraction, and the feature extraction process is cumbersome; they can only recognize gestures in a specific direction, which limits the range of people's activities. To solve these problems, this paper designed a CSI gesture recognition system based on Long Short-Term Memory (LSTM) training. The system preprocesses the collected CSI data through steps such as outlier removal, optimal subcarrier selection, and discrete wavelet transform denoising. The LSTM network is trained for classification without manual extraction of gesture features. Finally, the recognition of four gestures, pushing, pulling, left swing, and right swing, in four different directions is achieved, and an average recognition accuracy of 82.75% is reached. This paper discussed the influence of the distance between sender and receiver and of the data set size on the accuracy of gesture recognition, and compared the recognition of gestures in four directions with WiG and WiFinger. The results show that the proposed method achieves a higher recognition effect.
Parallel Harris Feature Point Detection Algorithm
ZHU Chao, WU Su-ping
Computer Science. 2019, 46 (11A): 289-293. 
Abstract PDF(2310KB) ( 291 )   
Harris feature point detection is widely used in target recognition, tracking, and 3D reconstruction. For large data volumes, feature point detection is time-consuming and computation-intensive. This paper proposed parallel Harris feature point detection algorithms in a multi-CPU programming model based on OpenMP and in GPU parallel environments based on the CUDA and OpenCL architectures. In comparison experiments on the hallFeng image set on different platforms, the experimental results show that the multi-CPU feature point detection algorithm based on OpenMP shows good multi-core scalability, and the parallel feature point detection algorithms based on the CUDA and OpenCL architectures obtain high speedup and good data and platform scalability, with a maximum speedup of more than 90 times; the acceleration effect is significant.
Image Stitching Algorithm Based on ORB and Improved RANSAC
ZHANG Mei-yu, WANG Yang-yang, HOU Xiang-hui, QIN Xu-jia
Computer Science. 2019, 46 (11A): 294-298. 
Abstract PDF(3123KB) ( 370 )   
Traditional feature point matching contains many mismatches, and its efficiency is not high. Aiming at mismatching, this paper proposed a screening method based on binary mutual information, which judges whether feature points are matched correctly according to their mutual information. In addition, the feature points extracted by the ORB algorithm are concentrated in regions of color change, so the transformation matrix obtained by the RANSAC algorithm is only applicable to the region where the feature points are distributed, which introduces errors into the stitching result. To solve this problem, this paper used the improved RANSAC algorithm to screen out the interior points first, and then used the interior points to obtain new feature points. In this way, the feature points can be dispersed, and an iterative method is used to obtain the best transformation matrix. The results show that screening feature points with binary mutual information improves the accuracy of matching and increases the number of matched feature points. The improved RANSAC algorithm can effectively solve the problem of few and overly concentrated feature points and make the image mosaic result more accurate.
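The RANSAC inlier-screening loop can be illustrated with a deliberately simple model, a pure 2D translation between matched points, rather than the full homography used in stitching (all data here is synthetic):

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=2.0, seed=0):
    """Minimal RANSAC: model is a pure 2D translation dst ≈ src + t.
    Sample one correspondence per iteration, count inliers, keep the best."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                                  # hypothesis
        inliers = np.linalg.norm(dst - (src + t), axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # refit the model on all inliers of the best hypothesis
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -3.0])          # true translation
dst[:5] += rng.uniform(20, 40, (5, 2))     # 5 gross mismatches
t, inl = ransac_translation(src, dst)
```

The paper's improvement amounts to running this screening first, then re-selecting spatially dispersed points from the inlier set before estimating the final homography.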
Continuous Sign Language Sentence Recognition Based on Double Transfer Probability of Key Actions
LI Chen, HUANG Yuan-yuan, HU Zuo-jin
Computer Science. 2019, 46 (11A): 299-302. 
Abstract PDF(1608KB) ( 178 )   
At present, the most difficult problem in continuous sign language recognition is how to split out the words effectively. In this paper, key actions were regarded as the basic units of sign language, and an algorithm based on the double transfer probability of key actions was proposed. After acquiring the sequence of basic units from continuous sign language, the boundaries of words can be effectively found by judging the intra-word and inter-word transfer relations of all adjacent basic units. The sequence of basic units is then segmented by these boundaries, and the candidate words of each group of basic units are identified. Finally, according to the transfer probabilities between candidate words of different groups, the probability of each corresponding synthesized sentence is calculated, and the final recognition result is output by the principle of maximum probability. The algorithm is easy to implement and has high execution efficiency. Experimental verification shows that it can be applied to a non-specific population.
Recognition of Chinese Finger Sign Language Based on Gray Level Co-occurrence Matrix and Fine Gaussian Support Vector Machine
JIANG Xian-wei, ZHANG Miao-xian, ZHU Zhao-song
Computer Science. 2019, 46 (11A): 303-308. 
Abstract PDF(3366KB) ( 458 )   
Sign language recognition is an effective way to break the communication barriers between deaf and hearing people. Generally, Chinese sign language can be divided into gesture language and finger language. Regional and individual differences lead to a wide variety of gestures, so gesture language recognition is relatively difficult and requires constant learning and training. Finger language gives its result through the expression of Chinese pinyin letters, which is deterministic, especially for names, special meanings, and abstract expressions. Most research in sign language recognition concentrates on gestures, focusing on key features such as hand shape, direction, position, and motion trajectory, and combines learning algorithms to improve recognition accuracy, but neglects the most basic and reliable finger language recognition. To this end, an effective method using the gray level co-occurrence matrix (GLCM) and a fine Gaussian support vector machine (FGSVM) was proposed to identify Chinese finger sign language more accurately and effectively. The research method is as follows. Firstly, a finger sign language data set was constructed: finger language images were obtained directly from a digital camera or from key frames of video; the hand shape was segmented from each image, and each image was adjusted to a specific N×N size and converted to grayscale. Secondly, feature extraction was performed to reduce the dimensionality of the intensity values in the grayscale image; the corresponding gray level co-occurrence matrix was created, and enhanced features were obtained by adjusting the inter-pixel distance and angle parameters. Finally, the extracted image features were submitted to the fine Gaussian support vector machine classifier under 10-fold cross-validation. Experiments on 510 Chinese finger sign language image samples from 30 categories show that the classification accuracy based on GLCM-FGSVM reaches 92.7%, and the method can be considered effective for Chinese finger sign language classification.
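The GLCM itself is straightforward to compute directly; a minimal sketch for a single (dx, dy) offset on a tiny toy image (the paper additionally varies the distance and angle parameters and derives texture statistics from the matrix):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one non-negative (dx, dy) offset:
    count how often level i is followed by level j at that displacement."""
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
M = glcm(img, dx=1, dy=0, levels=3)   # horizontal neighbor pairs
```

Features such as contrast, energy, and homogeneity are then computed from the normalized matrix and fed to the FGSVM classifier.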
Real-time Detection and Recognition Algorithm of Traffic Signs Based on ST-CNN
QU Jia-bo, QIN Bo
Computer Science. 2019, 46 (11A): 309-314. 
Abstract PDF(3042KB) ( 284 )   
At present, deep learning is a research hotspot in image-based traffic sign detection and recognition, and has achieved remarkable results. Aiming at traffic sign detection and recognition in car video, this paper proposed a real-time detection and recognition algorithm for traffic signs based on a Spatiotemporal CNN (ST-CNN). It constructs a spatiotemporal model (STM) based on the spatiotemporal relationship between frames of the image sequence and combines the STM with a Convolutional Neural Network (CNN). The experimental results show that the algorithm can detect, screen, track, and identify the same traffic sign in a video image sequence. It can effectively reduce the CNN's data input and system resource consumption and improve computational efficiency while ensuring high accuracy, satisfying the real-time requirements of traffic sign detection and recognition in video. The algorithm takes an average of 26.82 milliseconds per frame, and the recognition accuracy reaches 96.94%.
Multi-focus Image Fusion Based on Fractional Differential
MAO Yi-ping, YU Lei, GUAN Ze-jin
Computer Science. 2019, 46 (11A): 315-319. 
Abstract PDF(3137KB) ( 198 )   
Multi-focus image fusion uses the complementary information of multiple images to obtain a clear fused image. In traditional multi-scale analysis methods, image information is easily lost due to sampling and fusion strategies. In sparse representation methods, due to the limited expressive ability of the dictionary, the fused details are blurred and the time complexity is very high. For multi-focus image fusion based on spatial domain methods, the algorithm for measuring the image activity level is critical. A fractional differential feature was proposed to measure the activity level of the image. The algorithm first convolves the image with fractional masks in eight directions, then accumulates the absolute values of the convolutions in each direction to obtain the activity level measurement of the original image. The metric maps are then compared with a sliding window technique: the source whose window sum is larger is regarded as in focus, and its corresponding score map is incremented by one. The decision map is obtained from the score map information. Finally, the fused image is obtained by weighting the original images with the decision map. Experimental comparison and analysis show that this algorithm has certain advantages over traditional algorithms.
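The sliding-window comparison of activity maps can be sketched as follows (the activity maps here are synthetic step patterns, not fractional-differential outputs, and the voting rule is a simplified reading of the scheme described above):

```python
import numpy as np

def focus_decision(act_a, act_b, win=3):
    """Compare two activity-level maps with a sliding window: the source
    with the larger windowed activity sum wins that pixel's vote."""
    h, w = act_a.shape
    score = np.zeros((h, w), dtype=int)        # 1 -> take pixel from source A
    r = win // 2
    for y in range(h):
        for x in range(w):
            ys = slice(max(y - r, 0), y + r + 1)
            xs = slice(max(x - r, 0), x + r + 1)
            if act_a[ys, xs].sum() >= act_b[ys, xs].sum():
                score[y, x] = 1
    return score

A = np.zeros((6, 6)); A[:, :3] = 1.0           # A "sharp" on the left half
B = np.zeros((6, 6)); B[:, 3:] = 1.0           # B "sharp" on the right half
decision = focus_decision(A, B)
```

The resulting decision map then weights the two source images pixel-by-pixel to form the fused result.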
Robot Aided Lung Biopsy Positioning Mechanism Based on CT Image Guidance
LI Bo, KANG Xiao-dong, GAO Wan-chun, HONG Rui, WANG Ya-ge, ZHANG Hua-li
Computer Science. 2019, 46 (11A): 320-323. 
Abstract PDF(1844KB) ( 556 )   
A new spatial localization mechanism for CT image-guided robot-assisted percutaneous lung biopsy was proposed. Firstly, six marker points are designed and fixed on the CT examination bed to provide a hardware reference for localization. Secondly, an improved D-H inverse kinematics algorithm is used in software to guide the robot to perform the percutaneous lung puncture. The simulation results show that the success rate of a single puncture can be effectively guaranteed by the proposed localization mechanism.
Image Enhancement and Recognition Method Based on Shui-characters
YANG Xiu-zhang, XIA Huan, YU Xiao-min
Computer Science. 2019, 46 (11A): 324-328. 
Abstract PDF(4480KB) ( 307 )   
With the rapid development of graphic image processing technology, image enhancement and recognition methods have been widely used in various industries, and text recognition technology has also made great progress. Aiming at the problems of random brush strokes, variable fonts, and heavy noise in Shui script, this paper proposed an improved image enhancement and recognition method. A median filtering algorithm is used to reduce image noise, and histogram equalization is used to enhance image contrast. Binarization is performed to extract the target text in the image, and erosion and dilation are performed to thin the text and expand the background. Finally, an improved text extraction algorithm is used to highlight the outline of the Shui characters, and the Sobel operator is used to extract their edges. Simulation contrast experiments show that the method effectively reduces image noise and accurately extracts Shui characters. The method can be used in minority-script extraction and recognition, cultural relic restoration, image enhancement, and other fields, and is of great significance for protecting ethnic cultural heritage and carrying forward the traditional culture of ethnic minorities.
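The median filtering step can be sketched in plain NumPy as a 3×3 filter that removes isolated salt noise (a generic sketch, not the paper's exact pipeline; edge pixels are left unfiltered for simplicity):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge pixels kept as-is): stack the nine shifted
    copies of the image and take the per-pixel median of the interior."""
    out = img.copy()
    stack = [img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out[1:-1, 1:-1] = np.median(np.stack(stack), axis=0)
    return out

img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                  # one speck of salt noise
den = median3x3(img)             # the speck is replaced by its neighborhood median
```

Histogram equalization, binarization, and morphological erosion/dilation would follow this denoising step in the described pipeline.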
Night Vision Restoration Algorithm Based on Neural Network for Illumination Distribution Prediction
ZOU Peng, CHEN Yu-zhang, CHEN Long-biao, ZENG Zhang-fan
Computer Science. 2019, 46 (11A): 329-333. 
Abstract PDF(4070KB) ( 272 )   
Nighttime images suffer from uneven illumination, low overall brightness, large color deviation, and halos near artificial light sources. Existing deblurring models and algorithms often remove the effects of uneven illumination by estimating the illumination map. By combining a deep learning method with a radial basis function neural network, the illumination intensity was extracted, and a night image deblurring algorithm based on illumination estimation was proposed. For the problem of uneven illumination, the modulation transfer function (MTF) of the imaging process is calculated by estimating the illumination map. Taking the point spread function of the transport degradation model as prior knowledge and combining the mathematical model of semi-blind image restoration, the target image is processed to improve the quality of night vision imaging. The effectiveness of this method is verified by comparison with the traditional blind restoration method, and the image quality is improved evidently.
Head Posture Detection Based on RGB-D Image
LIU Zhen-yu, GUAN Tong
Computer Science. 2019, 46 (11A): 334-340. 
Abstract PDF(3128KB) ( 453 )   
In transcranial magnetic stimulation treatment, it is important to accurately and quickly detect the posture of the human head. Aiming at the problem that head pose estimation based on two-dimensional color images is sensitive to the environment and to posture, a head posture detection method combining color and depth images was proposed. The two-dimensional positions of the facial feature points are detected in the color image, and a three-dimensional head coordinate system is defined by combining the depth information. Then, on the basis of the existing ICP point cloud registration algorithm, a coarse registration method was proposed: the initial pose parameters are obtained by computing the transformation between the coordinate systems of the head point cloud to be detected and the standard head point cloud, preventing the point cloud registration from falling into a local optimum. Experiments show that the algorithm can accurately detect the head posture in a consulting room environment with uniform and sufficient lighting, and improves the robustness of pose estimation when the head posture angle is large.
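The coarse rigid registration between two point clouds can be sketched with the classical Kabsch/Procrustes solution, a common way to seed ICP with an initial pose (synthetic noiseless data; the paper's own coarse method may differ):

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch/Procrustes: least-squares rotation R and translation t with
    R @ P_i + t ≈ Q_i, usable as a coarse initial pose for ICP."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(7)
P = rng.normal(size=(20, 3))                   # "standard" head point cloud
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true                      # the observed, moved cloud
R, t = rigid_align(P, Q)                       # recovers the true pose exactly
```

Starting ICP from this closed-form pose is what keeps the fine registration from converging to a local optimum when the head rotation is large.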
Network & Communication
Survey of ORAM Research in Cloud Storage
GU Chen-yang, FU Wei, LIU Jin-long, SUN Gang
Computer Science. 2019, 46 (11A): 341-347. 
Abstract PDF(3055KB) ( 544 )   
In a cloud storage environment, servers and third parties can extract information by analyzing users' access behavior, which may threaten users' information security. The ORAM mechanism, which hides users' access patterns, is currently one of the main strategies for concealing users' access intentions: it effectively hides the correspondence between access behavior and access targets. Through a study of the basic theories and the development of ORAM, this paper summarized the basic schemes of the mechanism and set up an SSIBT performance evaluation index system to compare and analyze the classic ORAM mechanisms and their optimization schemes. Finally, possible research directions for ORAM were summarized based on the main research focuses.
Node Propagation Importance Algorithm for Multi-dimensional Complex Networks
ZHANG Xin, WANG Hui-hui, YAN Pei, GUO Yang
Computer Science. 2019, 46 (11A): 348-353. 
Abstract PDF(2421KB) ( 233 )   
How to measure node importance in a network topology has always been a research hotspot in the field of complex networks. Most existing research is oriented to single-dimensional networks. Therefore, aiming at the fact that real-world network structures often contain multiple coexisting dimensions, the definition of dimensional similarity was proposed to measure the relationship between dimensions. Considering the impact of information attenuation on node importance during information propagation, the definition of the propagation attenuation rate is given; the value of the attenuation coefficient is determined by a propagation non-destructive assumption on a fully connected single-dimensional network and a corresponding algorithm. A node importance algorithm is then given. The algorithm exploits the small-world characteristics of complex networks to limit the maximum number of propagation hops, so that it balances time efficiency and accuracy. The experimental results on real networks show that the proposed algorithm has certain advantages in accuracy and time efficiency compared with traditional node degree and node betweenness methods.
Cost-driven Workflow Data Placement Method in Hybrid Cloud Environment
HUANG Yin-hao, MA Yun, LIN Bing, YU Zhi-yong, CHEN Xing
Computer Science. 2019, 46 (11A): 354-358. 
Abstract PDF(3060KB) ( 160 )   
Scientific workflow execution in a hybrid cloud generates a large amount of transmission across data centers, resulting in long propagation delays and high cost. In order to place the data of a scientific workflow reasonably in a hybrid cloud environment, the advantages of public and private clouds should be taken into account and the cost of data placement optimized. A data placement strategy based on genetic algorithm particle swarm optimization (GAPSO) was proposed. It considers the different characteristics of public and private cloud data centers, such as capacity and storage cost, as well as the influence of propagation delay constraints on transmission costs, and combines the advantages of the genetic algorithm and particle swarm optimization to generate a data placement strategy for scientific workflows. The experimental results show that the GAPSO-based data placement strategy can effectively reduce the cost of data placement for scientific workflows in a hybrid cloud.
Improvement of Anti-collision Algorithm Based on RFID Tag
HOU Pei-guo, WANG Zhi-xuan, YAN Chen
Computer Science. 2019, 46 (11A): 359-362. 
Radio Frequency Identification (RFID) is a key technology in the Internet of Things. To solve the multi-tag collision problem in RFID systems, this paper proposed a framed slotted Aloha anti-collision algorithm based on a combined chaotic map (MDFSA). With this algorithm, the pseudo-random numbers obtained by the system are more uniform, so the tags' selection of time slots in each frame is more uniform. Simulation results show that the MDFSA algorithm improves the stability and efficiency of the system and reduces the number of collisions. Compared with the traditional DFSA algorithm, the proposed algorithm raises the maximum system efficiency by 33%. As the number of tags increases, the algorithm's performance becomes more stable and its advantages more significant, making it suitable for rapid identification in large-scale RFID tag systems.
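The role of the combined chaotic map can be sketched as follows (a stand-in illustration: the abstract does not specify which maps are combined, so a logistic map mixed with a tent map is assumed here, with arbitrary seeds):

```python
# Hypothetical sketch: slot selection for framed slotted Aloha driven by a
# combined chaotic map, as a stand-in for MDFSA's pseudo-random number source.

def combined_chaotic(x, y):
    """One step of a logistic map (x) mixed with a tent map (y)."""
    x = 4.0 * x * (1.0 - x)                      # logistic map, r = 4
    y = 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)  # tent map, mu = 2
    return x, y, (x + y) % 1.0                   # combined output in [0, 1)

def choose_slots(n_tags, frame_size, x=0.123, y=0.456):
    slots = []
    for _ in range(n_tags):
        x, y, z = combined_chaotic(x, y)
        slots.append(int(z * frame_size))        # map chaotic value to a slot
    return slots

slots = choose_slots(n_tags=200, frame_size=16)
```

The point of combining two maps is to flatten the biased invariant density a single map produces, which is what spreads the tags more evenly over the frame's slots.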
New Method of Multi-path Reliable Transmission Based on Redundancy Strategy
ZHANG Ting, ZHANG De-gan, CUI Yu-ya, CHEN Lu, GE Hui
Computer Science. 2019, 46 (11A): 363-368. 
In a densely deployed wireless sensor network (WSN), the data transmission process generates a large number of collisions, which result in loss of transmitted data and increased transmission delay. Multi-path data transmission can effectively reduce the data loss and long delays caused by collisions. A new redundant concurrent braided multi-path reliable transmission method (RCB-MRT) was proposed in this paper. The method adopts a redundancy strategy. Firstly, it clusters the WSN and sends the sensed data to the cluster head nodes; it then divides the data packets that sensor nodes need to transmit into several sub-packets, which intermediate nodes forward to the sink node along concurrent braided multiple paths. Compared with existing multi-path transmission methods, the experimental results show that the proposed method can effectively reduce the packet loss rate, lower transmission delay and increase network lifetime, which is very useful for reliable data transmission in WSN applications.
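A minimal sketch of the redundancy idea behind sub-packet splitting (illustrative only: the paper's braided-path scheduling is not modeled, and a single XOR parity sub-packet is assumed as the redundancy, so any one lost piece can be rebuilt at the sink):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(packet: bytes, k: int):
    """Split into k equal sub-packets plus one XOR parity sub-packet."""
    size = -(-len(packet) // k)  # ceiling division: bytes per sub-packet
    parts = [packet[i*size:(i+1)*size].ljust(size, b"\x00") for i in range(k)]
    return parts + [reduce(xor, parts)]

def rebuild(pieces, lost_index, k, length):
    """Recover the original packet when the piece at lost_index was dropped."""
    others = [p for i, p in enumerate(pieces) if i != lost_index]
    recovered = reduce(xor, others)       # XOR of the rest restores the lost piece
    parts = pieces[:k] if lost_index == k else \
            pieces[:lost_index] + [recovered] + pieces[lost_index+1:k]
    return b"".join(parts)[:length]

msg = b"sensor reading 42"
pieces = split_with_parity(msg, k=4)                  # 4 data pieces + 1 parity
restored = rebuild(pieces, lost_index=2, k=4, length=len(msg))
```

Each piece would travel on a different braided path, so a collision on one path costs only one sub-packet rather than the whole packet.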
Sub-regional Dynamic Optimization Algorithm for Path Coverage of Single Target
JIANG Yi-bo, WANG Wei, HE Cheng-long
Computer Science. 2019, 46 (11A): 369-375. 
Target detection is an important application of wireless sensor networks. In the process of target detection, users pay more attention to path coverage of the target once rich image information of the target has been obtained. Aiming at the problem of covering a single target at level K while minimizing the sensor distribution density over the whole motion path, the theoretical minimum distribution density of the sensors was first derived by combining the directed distribution model with a mathematical model for predicting the single target's position. Secondly, a sub-regional dynamic optimization algorithm for path coverage of a single target was designed. According to the distance between a sensor and the target, the algorithm divides the sensors in the monitoring area into external, middle and internal sensors and applies a different rotation decision to each type. Simulation results show that, compared with existing algorithms, the proposed algorithm can effectively reduce the deployment cost of sensors in the monitoring area.
Accurate and Robust Algorithm for Broadband Signal DOA Estimation
XU Zheng-qin, WU Shi-qian, LIU Qing-yu
Computer Science. 2019, 46 (11A): 376-380. 
The Direction of Arrival (DOA) of a source plays an important role in many practical applications, so accurate DOA estimation is a research hotspot in the field of array signal processing. In view of the low accuracy of the traditional Incoherent Signal-subspace Method (ISM) for DOA estimation of broadband signals under low SNR and reverberation, this paper presented an improved DOA estimation algorithm based on ISM. Firstly, the wideband signal is decomposed into several sub-bands by the discrete Fourier transform. Secondly, an energy threshold is constructed, by which the sub-bands are filtered and only those with energy above the threshold are retained. Thirdly, the covariance matrix of each retained sub-band is reconstructed, and the DOA parameters of each sub-band are estimated by the TLS-ESPRIT algorithm. Finally, a weighting strategy is proposed to fuse the DOA estimates of the sub-bands into an accurate final estimate. The experimental results show that the proposed algorithm can effectively improve the accuracy of broadband DOA estimation and has better robustness.
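The sub-band screening step can be sketched as follows (a simplified stand-in: a single-channel two-tone signal, a mean-energy threshold, and no TLS-ESPRIT stage; the paper's actual threshold construction may differ):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (fine for a short illustrative record)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

def strong_subbands(x, n_bands):
    """Keep only sub-bands whose energy exceeds the mean sub-band energy."""
    half = dft(x)[:len(x) // 2]                 # one-sided spectrum
    width = len(half) // n_bands
    energy = [sum(abs(c) ** 2 for c in half[b*width:(b+1)*width])
              for b in range(n_bands)]
    thr = sum(energy) / n_bands                 # energy threshold (assumed: mean)
    return [b for b in range(n_bands) if energy[b] > thr], energy

# two-tone test signal: DFT bins 8 and 24 of a 128-point record
sig = [math.sin(2*math.pi*8*t/128) + 0.5*math.sin(2*math.pi*24*t/128)
       for t in range(128)]
kept, energy = strong_subbands(sig, n_bands=8)
```

Only the surviving sub-bands would then go through covariance reconstruction and TLS-ESPRIT, which is what keeps noise-only bands from dragging down the fused estimate.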
Analysis of Influence of Mutual Inductances on Energy Transmitting Between Receiving Coils in WRSNs
WANG Xu, LIN Zhi-gui, LIU Xiao-feng, MENG De-jun
Computer Science. 2019, 46 (11A): 381-386. 
Based on a coupled magnetic resonance circuit model, this paper theoretically analyzed the influence of the mutual inductance between receiving coils on the energy transfer efficiency and received power of nodes. The factors affecting the mutual inductance between receiving coils are the distance, height difference and angle between them. The relationships between these three factors and the energy transfer efficiency and power of a node are established and analyzed theoretically. Through a study of the one-to-two charging process in wireless rechargeable sensor networks based on magnetic coupling resonance, the relationships between the three factors, the mutual inductance of the receiving coils, and the energy transfer efficiency and power of the nodes are analyzed. The results show that when the receiving coils are on the same side, the closer the two receiving coils are, the greater the mutual inductance, and the stronger the mutual inhibition of the nodes' energy reception.
Energy Efficient Routing Algorithm in Mobile Opportunistic Networks
YUAN Pei-yan, ZHANG Hao
Computer Science. 2019, 46 (11A): 387-392. 
Intermittently connected mobile networks generally lack a complete path from source to destination during data transmission. To speed up data delivery, a large number of multi-copy routing protocols have been proposed, but most of them do not consider energy. Considering that mobile devices are battery-driven, have limited energy and stop working once too much energy is consumed, an energy-efficient routing scheme was proposed, which sprays copies along disjoint paths to prolong node lifetime. In addition, a two-dimensional continuous-time Markov chain (CTMC) was used to model the dissemination process of packets. Finally, performance simulation and evaluation were carried out. The experimental results show that, compared with classical schemes, the proposed scheme achieves considerable improvements in delivery rate, average transmission delay, average network overhead, energy consumption and average hop count.
Directional Strong Barrier Constructing Scheme Based on Node Approximate Circle
WANG Fang-hong, LI Tao, JIN Ying-dong, HU Zhen-hao
Computer Science. 2019, 46 (11A): 393-398. 
Barrier coverage is one of the hot topics in directional wireless sensor networks (DSNs). To form barriers effectively when the sensing angle exceeds π, this paper designed an approximate circle model for directional nodes and proposed a centralized heuristic barrier construction scheme based on the approximate circle (HapC) and a distributed improved barrier construction scheme based on next-node movement (INSDBC) to construct directional strong barriers. In HapC, the whole network is divided into subgroups within which the approximate circles are connected, and the optimal mobile nodes for connecting these subgroups are selected by the Hungarian algorithm; to further decrease the number of barrier nodes, redundant nodes of the sub-barriers are removed. INSDBC maximizes the contribution of each node based on the geometric relations of the approximate circles; moreover, it selects the node with minimum energy consumption to join the barrier in turn from left to right. Simulation results show that the method can effectively construct strong barrier coverage and enhance the coverage performance of DSNs. This research has theoretical and practical significance for improving barrier coverage in DSNs.
Selective Network Coding Strategy Based on Packet Loss Prediction
GUO Bin, YU Dan-dan, LU Wei, HUANG Ming-he, ZENG Ya-lin
Computer Science. 2019, 46 (11A): 399-404. 
With the development of diversified wireless access technologies and the large-scale deployment of devices with multiple network interfaces, the transmission performance of multi-homed terminals has drawn wide attention from academia at home and abroad. Multipath Transmission Control Protocol (MPTCP) is one of the classical approaches: it distributes data over multiple paths in parallel to enhance transmission performance. However, in heterogeneous wireless networks, large differences between paths easily cause problems such as severe packet reordering, which greatly degrades MPTCP performance. Therefore, many scholars proposed using network coding (MPTCP-NC) to compensate for this defect, which effectively improves the robustness of transmission. However, the frequent generation and computation of coding coefficients adds extra MPTCP transmission delay and wastes limited bandwidth. To solve these problems, this paper proposed MPTCP with Selective Network Coding (MPTCP-SNC) based on packet loss prediction. MPTCP-SNC fully considers the differences of heterogeneous wireless network environments and selectively performs network coding according to the loss-rate state of each link, which reduces the extra overhead caused by blind network coding and improves the transmission performance of MPTCP.
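The selective-coding decision can be sketched as follows (illustrative only: the paper's loss-prediction model is not specified in the abstract, so a per-subflow EWMA loss estimate with an assumed threshold stands in for it):

```python
# Hypothetical sketch of MPTCP-SNC's per-link decision: keep a smoothed loss
# estimate per subflow and switch network coding on only for lossy paths.
# The smoothing factor and threshold are assumed values, not from the paper.

class Subflow:
    def __init__(self, alpha=0.2, threshold=0.05):
        self.alpha, self.threshold, self.loss_est = alpha, threshold, 0.0

    def on_packet(self, lost: bool):
        # EWMA loss prediction: blend the new sample into the running estimate
        sample = 1.0 if lost else 0.0
        self.loss_est = (1 - self.alpha) * self.loss_est + self.alpha * sample

    def should_code(self) -> bool:
        return self.loss_est > self.threshold   # code only on predicted-lossy paths

wifi, lte = Subflow(), Subflow()
for i in range(100):
    wifi.on_packet(lost=(i % 5 == 0))   # ~20% loss on the Wi-Fi path
    lte.on_packet(lost=False)           # clean LTE path
```

Coding only the subflows whose predicted loss exceeds the threshold is what avoids paying coefficient-generation overhead on links that rarely drop packets.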
Information Security
Security of User Access to Single SSID Wireless Network
WANG Li, XIA Ming-shan, WEI Zhan-chen, QI Fa-zhi, CHEN Gang
Computer Science. 2019, 46 (11A): 405-408. 
Aiming at the security problem in current single-SSID wireless networks that users of different identities, once authenticated and authorized, can access the wireless network anytime and anywhere and thus share the same network policies, such as bandwidth and access control lists (ACLs), this paper proposed a solution that groups users' access to the wireless network based on 802.1X and VLAN technology, and implemented the solution with FreeRADIUS. A deployment experiment proves that when users of different identities access the same wireless network, the proposed scheme can apply different access policies to them, which effectively improves security and simplifies the management of the wireless network.
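As a minimal illustration of dynamic VLAN assignment with FreeRADIUS (hypothetical user names, passwords and VLAN IDs; the deployment described above may be configured differently), entries in the `users` file can return the standard RADIUS tunnel attributes so that the access point places each authenticated group into its own VLAN:

```
# illustrative grouping: staff -> VLAN 10, guest -> VLAN 20
staff1  Cleartext-Password := "staffpass"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-ID = 10

guest1  Cleartext-Password := "guestpass"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-ID = 20
```

Per-VLAN bandwidth limits and ACLs are then enforced on the wired side, so a single SSID can still deliver different policies per identity group.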
Forward-secure RSA-based Multi-server Authentication Protocol
DU Hao-rui, CHEN Jian-hua, QI Ming-ping, PENG Cong, FAN Qing
Computer Science. 2019, 46 (11A): 409-413. 
Designing secure and practical key agreement protocols for multi-server environments is a hot topic in the field of information security. Based on general principles of protocol design, this paper analyzed an anonymous biometrics-based multi-server key authentication protocol designed by Wang et al. It pointed out that server counterfeiting attacks, smart card loss attacks and session key leakage attacks can be mounted against this protocol; at the same time, because user anonymity fails, user privacy is easily leaked, so the protocol is not suitable for practical application. To remedy these shortcomings, an improved protocol based on RSA was proposed. In the registration phase, the registration center (RC) and each server share distinct keys and time markers, which effectively resists server counterfeiting attacks and achieves anonymity and untraceability. In the login phase, the protocol uses public key techniques to realize dynamic-identity login and forward security. In the authentication phase, the protocol performs three rounds of mutual authentication with freshness checks on messages, preventing replay attacks and the like. Finally, security and efficiency analysis of possible attacks proves that the improved protocol resists smart card loss attacks, preserves anonymity and so on, while remaining computationally simple.
Modeling of Jamming Attack and Performance Analysis in Multi-hop Wireless Network
LIANG Tao, WANG Tong-xiang, LIU Jian-wei, YANG Jing
Computer Science. 2019, 46 (11A): 414-416. 
Multi-hop Wireless Networks (MHWNs) can easily be attacked by adversaries due to the shared nature and open access of the wireless medium. In order to reveal the characteristics of jamming attacks and analyze their impact on network performance, three typical jamming attacks were studied in this paper. Based on stochastic geometry and the jamming strategies, the memoryless jammer, random jammer and reactive jammer are modeled first. Then, the power of the jamming signals is calculated from the jamming model, and theoretical results for the collision probability and average access delay of network nodes are derived. Finally, a series of numerical tests are conducted.
Fine-grained Control Flow Integrity Method on Binaries
SIDIKE Pa-erhatijiang, MA Jian-feng, SUN Cong
Computer Science. 2019, 46 (11A): 417-420. 
Control flow integrity (CFI) is a security technique that prevents control flow hijacking attacks. Most existing CFI solutions implement only coarse-grained control flow integrity due to performance overhead. This paper presented Bincon, a fine-grained control flow integrity protection scheme for binaries. Bincon extracts control flow information from the target binary by static analysis. Checking code is implanted where control flow transfers, and the validity of each transfer is judged against the static analysis data. For indirect function calls, the target binary is analyzed in depth, and function prototypes and call-site signatures are reconstructed from the state information of the parameter registers and the function return value register. Call sites are mapped only to type-compatible functions, reducing the number of valid targets of indirect call sites. Compared with the compiler-based scheme Picon, the experimental results show that the proposed scheme significantly reduces time overhead while limiting the precision loss incurred by working without source code.
Android Malicious Application Detection Based on Improved Artificial Bee Colony Algorithms
XU Kai-yong, XIAO Jing-xu, GUO Song, DAI Le-yu, DUAN Jia-liang
Computer Science. 2019, 46 (11A): 421-427. 
With the rapid development of the Internet and mobile terminals, a great deal of important information is stored on mobile phones. An important way to keep this information from being compromised is to detect and handle malicious applications on the phone. Before detection, features must be extracted from samples, and how to select effective features from among many is a crucial step in malicious application detection. Focusing on applications on the Android platform, this paper established an Android malicious application detection model based on an improved artificial bee colony algorithm. By selecting features effectively, the feature combination that optimizes the classification result is obtained, thereby improving the detection performance of Android malicious application detection. Android application features are extracted under both static and dynamic conditions, and the detection model is tested with various classification algorithms. The results prove the feasibility and superiority of the proposed malicious application detection method based on the improved artificial bee colony algorithm.
Research on Security Risk Assessment Method of State Grid Edge Computing Information System
ZHAN Xiong, GUO Hao, HE Xiao-yun, LIU Zhou-bin, SUN Xue-jie, CHEN Hong-song
Computer Science. 2019, 46 (11A): 428-432. 
Based on risk assessment theory, this paper proposed a risk analysis method based on the fuzzy analytic hierarchy process for the State Grid Corporation of China edge computing information system. Security assessment items covering five aspects, namely the equipment layer, data layer, network layer, application layer and management layer, are given. On this basis, for network security, the importance of each evaluation item is compared by the analytic hierarchy process; combined with a fuzzy comprehensive evaluation matrix, the overall security evaluation value for network security is calculated, a risk assessment of network security is conducted, and the security assessment results are compared across different scenarios. Finally, the Microsoft threat modeling tool is used to construct a threat model of the State Grid edge computing information system, which is used to analyze and mitigate the risks.
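The analytic-hierarchy weighting step can be sketched as follows (a minimal illustration with an assumed 3x3 pairwise comparison matrix and the geometric-mean approximation; the paper's actual judgments and fuzzy evaluation matrix are not reproduced):

```python
import math

# Hypothetical pairwise comparison matrix over three assessment items
# (device, data, network), with the network layer judged most important.
A = [[1.0, 2.0, 1/3],
     [0.5, 1.0, 1/4],
     [3.0, 4.0, 1.0]]

def ahp_weights(A):
    """Priority vector via the row geometric-mean approximation of AHP."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(A)
```

The resulting weight vector is what gets multiplied into the fuzzy comprehensive evaluation matrix to produce the overall network-security score.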
Network Security Situation Forecast Based on Differential WGAN
WANG Ting-ting, ZHU Jiang
Computer Science. 2019, 46 (11A): 433-437. 
A network security situation prediction mechanism based on differential WGAN (Wasserstein GAN) is presented in this paper. The mechanism uses a generative adversarial network (GAN) to simulate the development process of the security situation and realizes situation prediction along the time dimension. To address GAN's difficult training, mode collapse and gradient instability, this paper used the Wasserstein distance as the GAN loss function and added a difference term to the loss, improving the classification precision of the situation value. The stability of the differential WGAN network was also proved. Experimental and analytical results show that this mechanism outperforms other mechanisms in terms of convergence, accuracy and complexity.
Management Information System Based on Mimic Defense
CHANG Xiao-lin, FAN Yong-wen, ZHU Wei-jun, LIU Yang
Computer Science. 2019, 46 (11A): 438-441. 
The safety of management information systems (MIS) affects the normal operations of many enterprises and organizations. In view of the shortcomings of existing security protection methods for management information systems, this paper proposed a management information system based on mimic defense (Mimic Management Information System, MMIS). Firstly, redundant executor sets are constructed for the presentation layer, business logic layer and data service layer. Secondly, the executor sets are dynamically scheduled by a dynamic configurator. Finally, the outputs of the executors are decided by a voter. The simulation results show that MMIS provides higher security than traditional MIS.
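The voting step can be sketched as follows (a toy illustration of majority voting over redundant executors; the executor responses are simulated, and the dynamic scheduling stage is omitted):

```python
# Hypothetical sketch of the MMIS voter: heterogeneous redundant executors each
# produce a response, and a majority vote masks a single compromised executor.
from collections import Counter

def vote(responses):
    value, count = Counter(responses).most_common(1)[0]
    return value if count > len(responses) // 2 else None  # no majority -> reject

ok = vote(["balance=100", "balance=100", "balance=100"])
tampered = vote(["balance=100", "balance=999", "balance=100"])  # one hijacked
```

Because an attacker must compromise a majority of heterogeneous executors simultaneously, a single exploited implementation cannot change the voted output.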
Custom User Anomaly Behavior Detection Based on Deep Neural Network
CHEN Sheng, ZHU Guo-sheng, QI Xiao-yun, LEI Long-fei, WU Shan-chao, WU Meng-yu
Computer Science. 2019, 46 (11A): 442-445. 
In the big data network environment, traditional methods for detecting abnormal user behavior cannot meet the requirements of massive-data detection, cannot respond quickly to constantly evolving abnormal behaviors and malware, and do not consider user behavior management, so the accuracy and stability of anomaly detection are insufficient. Combining network traffic analysis technology, this paper proposed a custom user abnormal behavior detection model based on a deep neural network, which realizes fine-grained analysis of network traffic and customizable user behavior management settings, so that user anomaly detection is more closely integrated with the needs of a specific network environment. The network traffic analysis data are used as the input vector of the deep neural network to realize massive-data detection, custom user behavior management and detection of unknown abnormal behaviors. The experimental results show that the proposed method has high accuracy and robustness, can effectively implement custom user behavior management, and overcomes the shortcomings of traditional user anomaly detection.
Personnel Identification System Based on Mobile Police
CAI Yu-xin, GONG Si-liang, YANG Ming, TANG Zhi-wei, ZHAO Bo
Computer Science. 2019, 46 (11A): 446-449. 
Under the conditions of an informationized and dynamic society, how to maintain public security and strengthen grassroots infrastructure has become an urgent issue for public security organs. Addressing the practical police needs of improving anti-terrorism stability maintenance, security for major events and public security prevention capabilities, this paper built a "cloud"-"pipe"-"end" mobile police identity verification system based on advanced public security network and secure access technologies. The system implements various mobile IP terminal secure access mechanisms and information security protection strategies for different application scenarios, which not only reduces the cost of public-security-related business but also improves work efficiency, creating certain economic and social benefits.
Spatial Encryption Algorithm Based on Double Chaos and Color Image
LV Dong-mei, LI Guo-dong
Computer Science. 2019, 46 (11A): 450-454. 
Aiming at the single character of the chaotic sequences and the mutual independence of the encryption steps in many color image encryption algorithms, a spatial encryption algorithm based on double chaos and color images was designed. The algorithm decomposes a color image into three component layers and applies an associated (parallel scrambling - ordered diffusion) encryption mode to the three components, making it difficult to attack any single component of the color image in isolation. Multiple sets of initial conditions are combined with the Henon map and a four-dimensional hyperchaotic system to generate complex chaotic sequences for use in the encryption process. Simulation results show that the NPCR is 99.6108%, the UACI is 33.4606%, the information entropy is 7.9975, and the key space is 10^(16×21). The key space is large enough to resist exhaustive attacks, and the analysis results show that the algorithm can resist differential attacks. The proposed algorithm effectively overcomes the singularity of the chaotic sequence and the independence of the encryption steps. Compared with other color image encryption algorithms, it is more secure.
Attack Detection Method for Electricity Information Collection System Based on Virtual Honeynet
CAO Kang-hua, DONG Wei-wei, WANG Jin-liang, ZHOU Lin, WANG Yong
Computer Science. 2019, 46 (11A): 455-459. 
The Advanced Metering Infrastructure (AMI) is the basis for smart grid systems to measure, collect, store, analyze and use consumption data. The communication and data transfer requirements between consumers (smart meters) and utilities significantly reduce the security of AMI. The electricity information collection system uses a variety of communication methods, communication protocols and new intelligent collection terminals, so the network attacks it faces are extremely frequent. Since the system currently focuses on the uplink rate of collection terminals and the connectivity of communication channels, corresponding security protection measures are lacking. Aiming at the above problems, a deployment scheme of a virtual honeynet for the electricity information collection system was designed and implemented, which solves the hardware-resource waste of traditional honeynets. At the same time, a data control algorithm was designed to inspect data packets, which effectively solves the problem of controlling attack traffic. Finally, a penetration attack test was carried out, and the experimental results were analyzed with respect to the three core functions of a honeynet, showing that the scheme can effectively detect attacks.
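The data-control idea can be sketched as follows (illustrative only: the paper's actual algorithm is not given in the abstract, so a simple per-source outbound connection budget stands in for it):

```python
# Hypothetical sketch of honeynet data control: outbound connections from a
# compromised honeypot are counted per source and dropped once a budget is
# exceeded, so captured attack traffic cannot be turned against third parties.

class DataControl:
    def __init__(self, limit=3):
        self.limit, self.counts = limit, {}

    def allow(self, src: str) -> bool:
        n = self.counts.get(src, 0)
        self.counts[src] = n + 1
        return n < self.limit   # permit only the first `limit` outbound flows

dc = DataControl(limit=3)
decisions = [dc.allow("honeypot-1") for _ in range(5)]
```

Letting a few outbound flows through keeps the honeypot believable to the attacker while still containing the damage, which is the usual trade-off in honeynet data control.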
Dynamical Management Technology of Multi-Level Security Domain for Embedded Operating System Based on MILS
GAO Sha-sha, WANG Zhong-hua
Computer Science. 2019, 46 (11A): 460-463. 
An embedded operating system based on the MILS architecture can achieve security isolation of data between different application partitions. However, existing MILS-based embedded operating systems cannot meet the need for secure migration, and cannot accomplish functional reconstruction and real-time dynamic loading of tasks after a task fails. Therefore, on the basis of analyzing the advantages and disadvantages of existing MILS-based embedded operating systems, a task-oriented multi-level security domain management architecture was proposed. The working principle of each functional module in the architecture is described in detail, which ensures dynamic migration and functional reconstruction within a specific security domain.
EMD-based Anomaly Detection for Network Traffic in Power Plants
ZHAO Bo, ZHANG Hua-feng, ZHANG Xun, ZHAO Jin-xiong, SUN Bi-ying, YUAN Hui
Computer Science. 2019, 46 (11A): 464-468. 
Aiming at the security threat detection requirements of new energy power plant networks, and at the poor adaptability, heavy manual involvement and false positives of existing network security anomaly detection methods, an adaptive real-time anomaly detection method based on Empirical Mode Decomposition (EMD) was proposed. Firstly, the method characterizes the traffic in the power plant network along several dimensions and establishes a traffic metrics model. Then, adaptive anomaly detection and security alarms are realized through adaptive EMD of the traffic metrics, variance calculation, Gaussian fitting and threshold determination. Typical attack datasets are used to compare this method with a wavelet-transform-based anomaly detection method. The test results show that this method can identify unknown traffic anomalies accurately, in real time and adaptively, and its detection effect is better than the wavelet-based method in terms of accuracy and false positives.
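The Gaussian-fitting and threshold-determination steps can be sketched as follows (a simplified stand-in: the EMD sifting itself is omitted and the metric series is assumed to be already extracted; the 3-sigma rule is an assumed choice):

```python
import math

def gaussian_threshold(series, k=3.0):
    """Fit a Gaussian (mean/variance) to the metric series and flag points
    beyond mean +/- k * sigma as anomalies."""
    n = len(series)
    mu = sum(series) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in series) / n)
    return [i for i, x in enumerate(series) if abs(x - mu) > k * sigma]

# steady traffic metric with one injected burst at index 30
metric = [10.0 + 0.1 * math.sin(i / 3.0) for i in range(60)]
metric[30] = 25.0   # simulated attack burst
anomalies = gaussian_threshold(metric)
```

Because the threshold is re-derived from the data itself, the detector adapts to each plant's traffic baseline without manually tuned limits, which is the adaptivity the method above claims.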
Chaotic System Image Encryption Algorithm Based on Dynamic Parameter Control
WANG Li-juan, LI Guo-dong, LV Dong-mei
Computer Science. 2019, 46 (11A): 469-472. 
To solve the problems of simple structure, low security and strong correlation in single chaotic systems, a chaotic image encryption algorithm based on dynamic parameter control was proposed. Firstly, a new chaotic system (LCT) is constructed to generate a chaotic sequence; sorting this sequence and indexing the original positions yields a position index sequence, according to which the image matrix is scrambled. Secondly, a new method of generating a pseudo-random sequence from the two chaotic sequences produced by the Henon map is designed, and the resulting new chaotic sequence is XORed with the scrambled image to obtain the final ciphertext image. Experimental results show that the correlation between the ciphertext and plaintext images is small; for two plaintext pixel values, the NPCR test values are 99.6414% and 99.6380% and the UACI values are 33.3869% and 33.3852%, all close to the theoretical values; the entropy of the plaintext image is 7.4416 and that of the ciphertext image is 7.9889. Therefore, the algorithm has strong robustness and reliable security, and can effectively improve the anti-attack ability of the encryption system.
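The sort-index scrambling plus keystream XOR pipeline can be sketched as follows (a stand-in illustration: a plain logistic map replaces the paper's LCT and Henon systems, which the abstract does not fully specify, and the image is a toy 1-D pixel list):

```python
def logistic(x0, n, r=3.99):
    """Generate n values of the logistic map from seed x0."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def encrypt(pixels, x0=0.3567, y0=0.7129):
    n = len(pixels)
    perm = sorted(range(n), key=logistic(x0, n).__getitem__)  # position index sequence
    scrambled = [pixels[p] for p in perm]                     # scrambling stage
    key = [int(v * 256) for v in logistic(y0, n)]             # chaotic keystream bytes
    return [s ^ k for s, k in zip(scrambled, key)]            # XOR diffusion stage

def decrypt(cipher, x0=0.3567, y0=0.7129):
    n = len(cipher)
    perm = sorted(range(n), key=logistic(x0, n).__getitem__)
    key = [int(v * 256) for v in logistic(y0, n)]
    scrambled = [c ^ k for c, k in zip(cipher, key)]
    out = [0] * n
    for i, p in enumerate(perm):
        out[p] = scrambled[i]                                 # undo the scrambling
    return out

img = [17, 42, 42, 200, 3, 99, 250, 0]   # toy 1-D "image"
cipher = encrypt(img)
restored = decrypt(cipher)
```

The seeds `x0` and `y0` play the role of the key: since both stages are driven by the chaotic sequences they generate, decryption with the same seeds inverts the pipeline exactly.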
Study on Intrusion Detection Based on PCA-LSTM
GAO Zhong-shi, SU Yang, LIU Yu-dong
Computer Science. 2019, 46 (11A): 473-476. 
At present, concealed attacks such as exploits, generic attacks, SQL injection and APT are becoming more and more serious, and shallow machine learning can no longer detect these hidden forms of attack well. In this paper, an intrusion detection model was designed in which principal component analysis optimizes a long short-term memory (LSTM) network. The main principle is to remove noise from the sample data through principal component analysis, and to exploit the memory function of the LSTM network and its powerful ability to learn sequential data. The UNSW-NB15 dataset established by the Australian Centre for Cyber Security is adopted for experimental analysis by adjusting the key parameters: time steps, learning rate and activation function. The results show that this model achieves higher accuracy than traditional models.
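The PCA preprocessing step can be sketched as follows (a minimal pure-Python power iteration for the leading principal component on toy features; the LSTM that would consume the projected sequences is omitted, and the data are assumed values):

```python
def first_component(X, iters=200):
    """Leading principal component of a row-sample matrix X by power iteration."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]     # center the data
    # sample covariance matrix (d x d)
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                                        # power iteration
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

# toy features: the second column tracks ~2x the first, the third is near-constant
X = [[1.0, 2.1, 0.01], [2.0, 3.9, -0.02], [3.0, 6.2, 0.00], [4.0, 8.0, 0.02]]
v, _ = first_component(X)
```

Projecting samples onto the top few components discards directions with near-zero variance (the third column here), which is the noise-removal role PCA plays before the LSTM.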
Cryptographic Algorithm Based on Combination of Logistic and Hyperchaos
HAN Xue-juan, LI Guo-dong, WANG Si-xiu
Computer Science. 2019, 46 (11A): 477-482. 
In this paper, a scrambling scheme was designed and a four-dimensional hyperchaotic system was improved; based on the Logistic map and the hyperchaos, a double-chaos image encryption algorithm was proposed. The scrambling method uses the Logistic map to scramble the image twice, then takes the scrambled ciphertext image as input and diffuses it with the improved 4D hyperchaos to obtain the final ciphertext image. The algorithm combines image block scrambling with whole row-and-column scrambling to ensure a pixel scrambling rate of nearly 100%. In the diffusion process, a key stream with stronger pseudo-randomness is generated by the improved hyperchaos for multiple rounds of encryption, so that the plaintext information is well hidden. The simulation results show that the algorithm has a good encryption effect: it not only has strong sensitivity and a large key space, but can also resist attacks effectively, and it has application value in image information security.
Security Analysis and Optimization of Hyper-chaotic Color Image Encryption Algorithm
ZHAO Fang-zheng, LI Cheng-hai, LIU Chen, SONG Ya-fei
Computer Science. 2019, 46 (11A): 483-487. 
Abstract PDF(4368KB) ( 207 )   
References | RelatedCitation | Metrics
Given that current color image encryption algorithms of the "scrambling-diffusion" mode have many problems, such as a small key space, a tedious encryption process and security vulnerabilities, this paper proposed a new color image encryption algorithm based on a hyperchaotic system that adopts a "transforming-scrambling-diffusion" model. Before scrambling, it first calculates the number of iterations according to the attributes of the image itself and executes an iterative Gray-code transformation on all pixel values of the color image; then the chaotic sequence generated by the four-dimensional hyperchaotic system and the pixel matrix converted to Gray code are transformed into one-dimensional matrices. The former is sorted and the latter changes correspondingly, completing whole-domain scrambling of the image pixel matrix. Then, bit operations are executed to complete image diffusion, and the ciphertext is obtained by matrix transformation. Evaluation indexes such as key sensitivity, histogram, information entropy and correlation are calculated and analyzed through simulation experiments and compared with other algorithms, proving that the encryption algorithm has strong anti-attack ability.
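The Gray-code transformation applied to the pixel values is the standard binary-reflected Gray code; a minimal sketch of the forward and inverse transforms on 8-bit values (the paper's iterated application and key schedule are not modelled):

```python
def to_gray(v):
    """Binary-reflected Gray code of a pixel value."""
    return v ^ (v >> 1)

def from_gray(g):
    """Invert the Gray-code transform by cumulative XOR."""
    v = g
    while g:
        g >>= 1
        v ^= g
    return v
```

For example, to_gray(5) is 7 and from_gray recovers 5; applying to_gray to every pixel is one "transforming" pass of the scheme, and it is exactly invertible by the receiver.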
Frequency Domain Adaptive Image Encryption Algorithm Based on Fractional Order Chen Hyperchaos
LIANG Yan-hui, LI Guo-dong, WANG Ai-yan
Computer Science. 2019, 46 (11A): 488-492. 
Abstract PDF(3602KB) ( 206 )   
References | RelatedCitation | Metrics
With the development and prosperity of Internet technology, the dissemination and application of digital images are increasingly extensive, and the security of digital images receives more and more attention. Among image encryption algorithms, the scrambling-diffusion algorithm is widely used because it conforms to the two-dimensional distribution of image data. However, the common scrambling-diffusion algorithm has problems such as low security and low encryption efficiency. Therefore, fractional-order Chen hyperchaos is used for scrambling in the frequency domain, a hyperchaotic S-box is then designed for substitution, and finally bi-directional exclusive-or and cyclic left-shift diffusion are used to achieve a complete encryption process combining the frequency domain with the spatial domain and combining scrambling, substitution and diffusion. The algorithm has a large key space, high key sensitivity, a uniform ciphertext histogram, low correlation between adjacent ciphertext pixels, high security and strong resistance to differential attack, and the correlation coefficient r is close to the ideal value. The algorithm can achieve the same security level as previous image encryption algorithms in only three iterations, so the encryption efficiency is significantly improved.
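The diffusion idea, in one direction, can be sketched as a chained XOR with a key stream: each ciphertext byte depends on the key stream and the previous ciphertext byte, so a one-bit plaintext change propagates forward (a NumPy sketch; the hyperchaotic key stream, the cyclic shifts and the backward pass of the paper are not reproduced):

```python
import numpy as np

def forward_diffuse(p, ks):
    """Left-to-right XOR diffusion: c[i] = p[i] ^ ks[i] ^ c[i-1]."""
    c = np.empty_like(p)
    prev = 0
    for i in range(len(p)):
        c[i] = p[i] ^ ks[i] ^ prev
        prev = c[i]
    return c

def inverse_diffuse(c, ks):
    """Invert the chain: p[i] = c[i] ^ ks[i] ^ c[i-1]."""
    p = np.empty_like(c)
    prev = 0
    for i in range(len(c)):
        p[i] = c[i] ^ ks[i] ^ prev
        prev = c[i]
    return p

rng = np.random.RandomState(1)
plain = rng.randint(0, 256, 32, dtype=np.uint8)      # stand-in pixels
keystream = rng.randint(0, 256, 32, dtype=np.uint8)  # stand-in chaos bytes
cipher = forward_diffuse(plain, keystream)
restored = inverse_diffuse(cipher, keystream)
```

Running the same chain in the opposite direction as well (the "bi-directional" part) makes every ciphertext byte depend on every plaintext byte.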
Study on SM4 Differential Fault Attack Under Extended Fault Injection Range
ZHU Ren-jie
Computer Science. 2019, 46 (11A): 493-495. 
Abstract PDF(2740KB) ( 324 )   
References | RelatedCitation | Metrics
In order to make the differential fault attack on the SM4 block cipher easier to implement under real conditions, various methods of SM4 differential fault attack were studied and analyzed in depth in this paper. Building on the existing fault attack methods, this paper proposed a new attack method which allows the scope of fault injection to extend to the 26th round of the encryption algorithm. This removes the limitation of previous attack methods that the fault must be injected into the last four rounds of the encryption algorithm, thus achieving the goal of expanding the fault injection range.
Method for Unknown Insider Threat Detection with Small Samples
WANG Yi-feng, GUO Yuan-bo, LI Tao, KONG Jing
Computer Science. 2019, 46 (11A): 496-501. 
Abstract PDF(1767KB) ( 427 )   
References | RelatedCitation | Metrics
Few insider threats are usually buried in a mass of normal data. It is difficult for traditional anomaly detection methods based on machine learning to detect insider threats because of the lack of sufficient labeled data. To detect such unknown insider threats with small samples, this paper proposed a method based on prototypical networks which uses Long Short-Term Memory networks to extract features from user behavior data and updates parameters by meta learning. The method uses cosine similarity to classify samples of new classes which are not seen in the training set. Experimental results with data generated from the CMU-CERT dataset show that the proposed method is effective, with a classification accuracy of 88% for detecting unknown insider threats.
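The classification step, assigning a new sample to the nearest class prototype by cosine similarity, can be sketched as follows (the embeddings here are hand-picked stand-ins for the LSTM features; the prototypical-network training itself is not reproduced):

```python
import numpy as np

def cosine_prototype_classify(queries, prototypes):
    """Assign each query embedding to the class whose prototype
    has the highest cosine similarity."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (q @ p.T).argmax(axis=1)

# two illustrative class prototypes, e.g. "normal" and "insider threat"
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
queries = np.array([[0.9, 0.1], [0.2, 2.0]])
labels = cosine_prototype_classify(queries, prototypes)
```

Because prototypes are just averaged support embeddings, a class unseen in training can be classified as soon as a few labeled samples of it are available, which is what makes the few-shot setting workable.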
Image Encryption Algorithm of Chaotic Cellular Automata Based on Fractional Hyperchaos
LIANG Yan-hui, LI Guo-dong
Computer Science. 2019, 46 (11A): 502-506. 
Abstract PDF(3559KB) ( 207 )   
References | RelatedCitation | Metrics
In order to ensure the security and reliability of images in the process of information transmission, since ordinary scrambling-diffusion encryption algorithms can no longer meet today's security and efficiency requirements, in this paper the plaintext is transformed into a hash value used as the initial value of the chaos, and four chaotic sequences are generated by fractional-order Chen hyperchaos. Firstly, a three-dimensional Arnold mapping is used for bidirectional parametric scrambling; then a hyperchaotic S-box is designed for substitution; finally, a chaotic cellular automaton is used for cyclic diffusion, thus achieving a complete encryption process combining scrambling, substitution and diffusion (DSD). The algorithm has a large key space, high key sensitivity, a uniform ciphertext histogram, low correlation between adjacent ciphertext pixels, high security and strong resistance to differential attack, and the information entropy is close to the ideal value. The algorithm can achieve a high level of security without multiple iterations, so encryption security and efficiency are significantly improved.
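The cellular-automaton ingredient of the diffusion stage can be illustrated with one update of an elementary CA (rule 30 here, a commonly used chaotic rule; the paper's specific chaotic CA and how it is keyed are not modelled):

```python
def ca_step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton
    with periodic boundary: each cell's next state is the rule bit
    selected by its (left, centre, right) neighborhood."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2
                      | cells[i] << 1
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]
```

Iterating ca_step on a key-derived seed row yields a pseudo-random bit sequence that can serve as the diffusion keystream.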
Interdiscipline & Application
Software Quality Evaluation Based on Neural Network:A Systematic Literature Review
ZONG Peng-yang, WANG Yi-chen
Computer Science. 2019, 46 (11A): 507-516. 
Abstract PDF(3211KB) ( 491 )   
References | RelatedCitation | Metrics
Software quality is a significant factor throughout the software life cycle. With the rapid development of the software industry, users have ever higher requirements on software quality. Therefore, how to establish a more accurate software quality evaluation model has become a hot topic in the field of software quality research. A software quality evaluation model aims to find, from historical data, the relationship between various software characteristics and software quality, and neural networks have become one of the most appropriate methods for establishing such a complex relationship because of their powerful learning ability and non-linear mapping ability. Using the method of systematic literature review, this paper summarized 50 domestic and foreign publications from 1994 to 2018 on software quality evaluation using neural network methods, from the aspects of inputs, evaluation targets, modeling methods and the training of neural networks. Some rules, unsolved problems and possible research directions for evaluating software quality with neural network methods were identified.
Source Code Memory Leak Static Detection Based on Complex Control Flow
JI Xiu-juan, SUN Xiao-hui, XU Jing
Computer Science. 2019, 46 (11A): 517-523. 
Abstract PDF(2024KB) ( 166 )   
References | RelatedCitation | Metrics
C/C++ source code suffers from many memory leaks because heap memory is allocated manually. For complex control flow with multiple branches in the control flow graph, it is more difficult to detect memory leaks because of the uncertainty of memory allocation and release. A memory leak classification method was defined based on path abstraction in complex control flow. A projection-based detection algorithm was proposed, in which the original control flow graph is projected, thus simplifying and regularizing it. Meanwhile, in the inter-procedural analysis, by combining the Cloning-Expanded ICFG approach and the Expanded Supergraph approach, an Inter-procedural Memory def-use Control Flow Graph (IMCFG) is built. Finally, this algorithm is shown to be effective and precise by experiments.
Parallel Algorithm Design for Assisted Diagnosis of Prostate Cancer
SU Qing-hua, FU Jing-chao, GU Han, ZHANG Shan-shan, LI Yi-fei, JIANG Fang-zhou, BAI Han-lin, ZHAO Di
Computer Science. 2019, 46 (11A): 524-527. 
Abstract PDF(2022KB) ( 146 )   
References | RelatedCitation | Metrics
In an era of high cancer incidence, prostate cancer is a disease unique to men, and its incidence is increasing year by year. Convolutional neural networks have attracted much attention due to their powerful performance in image recognition and are also well suited to computer-aided diagnosis. Training a convolutional neural network is time-consuming because neural network models often contain a large number of parameters, so how to accelerate the training of neural networks has become a very important issue in the field of deep learning. To solve this problem, a multi-GPU parallel scheme is generally adopted, among which synchronous data parallelism performs better when GPU performance is balanced. Therefore, this paper adopts a data-parallel algorithm to accelerate a three-dimensional convolutional network for prostate images.
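Synchronous data parallelism reduces to "each worker computes a gradient on its own shard, gradients are averaged, one shared update is applied". A toy NumPy sketch on a linear least-squares problem (the 3D convolutional network and real GPUs are replaced by stand-ins; grad_fn, lsq_grad and the shard layout are illustrative names of ours):

```python
import numpy as np

def data_parallel_step(params, shards, grad_fn, lr=0.01):
    """One synchronous data-parallel update: average per-worker
    gradients, then apply a single shared parameter update."""
    grads = [grad_fn(params, X, y) for X, y in shards]
    return params - lr * sum(grads) / len(grads)

def lsq_grad(w, X, y):
    """Toy per-shard gradient of the mean squared error."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.RandomState(0)
X = rng.randn(64, 3)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]   # split across 4 "GPUs"
w = np.zeros(3)
for _ in range(500):
    w = data_parallel_step(w, shards, lsq_grad, lr=0.1)
```

With equal shard sizes, the averaged gradient equals the full-batch gradient, which is why the synchronous scheme behaves well when the workers are balanced.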
Parallel Design and Optimization of GRAPES_CUACE On-line Coupled Air Quality Mode
YE Yue-jin, CHEN De-xun, HU Jiang-kai, MA Xin, ZHANG Xiao-ye
Computer Science. 2019, 46 (11A): 528-534. 
Abstract PDF(2737KB) ( 227 )   
References | RelatedCitation | Metrics
This article mainly introduced the research and analysis of parallel optimization algorithms for the GRAPES_CUACE dust aerosol coupled model under different versions of the x86 architecture. Drawing on current mainstream parallel optimization design methods at home and abroad, combined with the GRAPES_MESO system's own program architecture and parallel framework, corresponding parallelization was implemented for different versions of the x86 architecture. Using the gprof tool and instrumented timing, three main hotspot modules were identified: IO, communication and physical processes. The main optimization methods for the IO module are: 1) replacing discrete reads and writes with continuous reads and writes; 2) using buffers to turn sparse memory accesses into contiguous accesses; 3) asynchronous IO. The following methods are adopted for the communication part: 1) communication is changed from fine-grained to coarse-grained; 2) collective communication with lower time complexity is adopted. Analysis of the optimization results for the IO and communication modules shows that the time-consuming proportion of the IO module decreased from 43.7% to 1.41%, a great reduction, and the best-case performance is improved by 317 times. Therefore, the methods described in this paper greatly improve the operating efficiency of the IO module. In addition, the main methods used to optimize the physical process are as follows: 1) the multi-layer loop calculation process is changed from discrete to continuous; 2) the communication mechanism is cyclically shifted; 3) data are reused to reduce computational redundancy; 4) the stack variable space is reduced. Computational performance is increased by 22%, which further improves the parallel efficiency of the program and the strong scalability of the model.
Synchronization of a Certain Family of Automata and Consumption Function Analysis
CHEN Xue-ping, HE Yong, XIAO Fen-fang
Computer Science. 2019, 46 (11A): 535-538. 
Abstract PDF(1607KB) ( 145 )   
References | RelatedCitation | Metrics
Let n be an integer greater than 1. After introducing the automaton Cn,i for each integer i<n, the synchronizing automata in the family {Cn,i|0≤i≤n}, as well as their shortest synchronizing words, are determined. Moreover, with the aid of the so-called transition consumption functions of automata and the weighted average consumptions of words, the advantages of such synchronizing automata in some typical applications are analyzed.
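Finding a shortest synchronizing word can be done by breadth-first search over subsets of the state set. The sketch below applies it to the classical Černý automaton on three states (an illustration only; the paper's family Cn,i and its consumption functions are not reproduced):

```python
from collections import deque

def shortest_sync_word(states, alphabet, delta):
    """BFS over subsets of states: the first singleton reached
    gives a shortest word mapping every state to one state."""
    start = frozenset(states)
    seen = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return seen[s]
        for a in alphabet:
            t = frozenset(delta[x][a] for x in s)
            if t not in seen:
                seen[t] = seen[s] + a
                queue.append(t)
    return None  # automaton is not synchronizing

# Cerny automaton, n = 3: 'a' merges state 0 into 1, 'b' cycles the states
delta = {0: {'a': 1, 'b': 1}, 1: {'a': 1, 'b': 2}, 2: {'a': 2, 'b': 0}}
word = shortest_sync_word([0, 1, 2], "ab", delta)
```

The word found has length (n-1)^2 = 4, matching the Černý bound for n = 3.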
Dynamics and Complexity Analysis of Fractional-order Unified Chaotic System
YAN Bo, HE Shao-bo
Computer Science. 2019, 46 (11A): 539-543. 
Abstract PDF(2825KB) ( 243 )   
References | RelatedCitation | Metrics
Based on the Adomian decomposition method (ADM), Lyapunov exponent spectra, bifurcation diagrams and attractor diagrams, the dynamics of the fractional-order unified chaotic system and the law of the system state changing with its parameter and derivative order were analyzed, and the route from periodic state to chaos was observed. Moreover, the complexity of the fractional-order unified chaotic system was analyzed by means of the C0 algorithm and the Sample Entropy algorithm. Comparative analysis with the maximum Lyapunov exponent spectrum shows that the complexity analysis results are well consistent with the maximum Lyapunov exponent spectrum in reflecting the dynamics of the fractional-order unified chaotic system, and the results of the C0 algorithm are better than those of the Sample Entropy algorithm. Finally, a pseudo-random sequence generator was designed based on the fractional-order unified chaotic system. The test results show that it can pass all NIST tests, which lays an experimental basis for practical applications of the fractional-order unified chaotic system.
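Sample entropy, one of the two complexity measures used, can be written down directly from its definition: count template matches of length m and m+1 under a Chebyshev tolerance (a plain NumPy sketch with the common choice r = 0.2·std; the C0 algorithm and the chaotic series themselves are not reproduced):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B), where B and A count pairs of matching
    templates of length m and m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        c = 0
        for i in range(len(templ)):
            d = np.abs(templ - templ[i]).max(axis=1)  # Chebyshev distance
            c += int(np.sum(d <= tol)) - 1            # drop the self-match
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

s_periodic = sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 200)))
s_random = sample_entropy(np.random.RandomState(0).randn(200))
```

A regular signal scores lower than a noisy one, which is how the measure tracks the system's transition from periodic behavior into chaos.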
Inter-merchant Account Management Model Based on Blockchain
LI Wei, WANG Teng-yu, LIU Qian-long, LIU Ke-meng, FAN Yong-gang
Computer Science. 2019, 46 (11A): 544-547. 
Abstract PDF(3334KB) ( 231 )   
References | RelatedCitation | Metrics
With the continuous development of the internet economy, more and more merchants choose to use internet terminals for account management. However, there are a series of problems, such as loss of account books, tampering with data, and crises of trust between merchants due to human factors. By sorting out the common account management problems in current society, distributed storage and the traceability and tamper-resistance of data, which are precisely the main features of blockchain, become the keys to solving the problems of account management. In view of the importance of account books to transaction partners and the high degree of fit between blockchain technology and account book management, this paper proposed an inter-merchant account management model based on blockchain technology. Firstly, after introducing the characteristics of blockchain and the link between blockchain and ledger management, the structure of the ledger management model is explained. Secondly, the design of the transaction text format, blocks, smart contracts and consensus is analyzed. Finally, the model is analyzed for security and performance, and the shortcomings of its performance are discussed. It is proved that the account book management model based on blockchain technology meets the security and performance requirements of account book management between merchants. This model provides new ideas and methods for establishing safe and reliable transaction account management.
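The core property the model relies on, that ledger records cannot be silently altered, comes from hash-chaining the blocks. A minimal sketch (the paper's transaction text format, smart contracts and consensus are not modelled):

```python
import hashlib
import json

def make_block(prev_hash, records):
    """A ledger block: its hash covers both the records and the
    previous block's hash, chaining the blocks together."""
    body = {"prev": prev_hash, "records": records}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def chain_valid(chain):
    """Recompute every hash and check each link to its predecessor."""
    for prev, blk in zip(chain, chain[1:]):
        body = {"prev": blk["prev"], "records": blk["records"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev["hash"] or blk["hash"] != digest:
            return False
    return True

genesis = make_block("0" * 64, [])
b1 = make_block(genesis["hash"], [{"from": "A", "to": "B", "amount": 10}])
```

Altering any record changes the recomputed hash, so chain_valid fails; this is the tamper-evidence the inter-merchant account model builds on.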
Research on Decentralized Transaction Consensus Mechanism of Cloud Computing Resources Based on Block Chain
LIANG He-jun, HAN Jing-ti
Computer Science. 2019, 46 (11A): 548-552. 
Abstract PDF(2155KB) ( 630 )   
References | RelatedCitation | Metrics
The technical characteristics of blockchain, namely decentralization, trustlessness, and complete and traceable data, provide brand-new opportunities and challenges for cloud computing. Users obtain computing, storage, database and other resources from traditional centralized data centers through the network, and there are many problems under this model, such as high operating cost, low efficiency and unsafe data storage. In this paper, a decentralized cloud computing trading mechanism and method was proposed to build a cloud computing resource trading market based on blockchain technology, focusing on the application of consensus mechanisms in the decentralized cloud computing trading market. Through analysis and comparison of popular blockchain consensus algorithms (PoS, PoW, DPoS, PBFT), this paper proposed an improved practical Byzantine fault-tolerant (PBFT) algorithm, which is employed to mitigate the resource waste and trust loss that arise when Ethereum is applied to a consortium chain, so as to reduce costs, and applies the improved algorithm to the decentralized trading market for cloud computing resources. This paper proposed to introduce blockchain technology into cloud computing resource trading and to collectively maintain a reliable distributed database in a decentralized and trustless manner. The design of a consensus mechanism based on Ethereum can build a global Internet computing power trading platform, thus realizing the elastic scalability and on-demand allocation of cloud computing resources.
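The quorum rule at the heart of PBFT, the algorithm the paper improves, is small enough to state in code: with n replicas the protocol tolerates f = (n-1)//3 Byzantine nodes and commits only on 2f+1 matching messages (the paper's improved variant is not reproduced here):

```python
def pbft_quorum(n):
    """Size of a PBFT commit quorum for n replicas: 2f + 1,
    where f = (n - 1) // 3 faulty replicas are tolerated."""
    f = (n - 1) // 3
    return 2 * f + 1

def can_commit(matching_votes, n):
    """A replica commits once it holds a full quorum of matching votes."""
    return matching_votes >= pbft_quorum(n)
```

So a 4-node consortium chain tolerates one faulty node and needs 3 matching votes, while 7 nodes tolerate two faults and need 5.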
Study on Trustworthy Backtracking Mechanism of Experimental Teaching Fund Based on Blockchain
QU Guang-qiang, SUN Bin
Computer Science. 2019, 46 (11A): 553-556. 
Abstract PDF(2026KB) ( 191 )   
References | RelatedCitation | Metrics
This paper proposed a solution for an experimental teaching fund system based on blockchain technology, which is divided into a core data network and an information disclosure network. The core data network is a distributed database composed of a series of core nodes with equal power, and is mainly responsible for data storage and input. The information disclosure network is an open network where anyone can read the complete data stored in the core data network and supervise the experiment-related information, but has no write permission. Experimental results show that decentralized algorithms and data can add a better monitoring mechanism to traditional methods based on moral education.
Across Block Chain Consensus Transaction Model Based on Cluster Center
ZHAO Tao, ZHANG Ling-hao, ZHAO Qi-gang, WANG Hong-jun
Computer Science. 2019, 46 (11A): 557-561. 
Abstract PDF(3570KB) ( 363 )   
References | RelatedCitation | Metrics
The characteristics of blockchain, such as decentralization, anonymity and tamper-resistance, have given it a profound influence on finance and other fields. However, at present there are still many problems in blockchain systems, such as low computing efficiency, limited capacity per unit time, and poor compatibility and interoperability between different blockchain systems. In view of these three problems, an efficient blockchain consensus and exchange system was proposed. Under the premise of preserving the openness, security, tamper-resistance and other features of the blockchain system, the system realizes rapid confirmation by the blockchain network of transactions issued by users, and effectively prevents the problem of synchronizing excessive block data at one time in the same blockchain network. The system divides the nodes in the blockchain into three types: consensus service nodes, cross-chain switching nodes and application nodes. The consensus service nodes with high computing power are connected through a high-speed network to form a blockchain P2P network serving a certain business field, and provide consensus computing services for application nodes in the blockchain network. Cross-chain switching nodes are connected to different blockchain networks at the same time, and a blockchain switching network is formed between switching nodes based on the P2P protocol, providing cross-chain access services for application nodes of different blockchain networks. Application nodes can synchronize data from the consensus service nodes of their respective blockchain networks, access cross-chain switching nodes, and send in-chain or cross-chain transactions. The experimental results show that the blockchain service network built by this system can greatly improve consensus computing efficiency and increase the volume of transactions per unit time.
Application of OpenMP and Ring Buffer Technology in Defect Detection of Glass Substrates
HU Hai-bing, XU Ting, ZHANG Bo, XU Dong-jian, JIN Shi-qun, LU Rong-sheng
Computer Science. 2019, 46 (11A): 562-566. 
Abstract PDF(2884KB) ( 152 )   
References | RelatedCitation | Metrics
In the defect detection of TFT-LCD glass substrates, in order to solve the problems of large data flow, complex data processing and strict timing requirements for data input and output, a multi-threaded parallel processing method using a ring buffer and OpenMP was proposed. This method uses OpenMP to realize multi-core parallel processing of the complex computations, so as to make full use of multi-core processor resources and improve data processing capability. At the same time, in the process of defect data input, data processing and data output, multi-threaded parallel processing and real-time stable output are realized through ring buffer technology. This method was applied to a real-time defect detection system: the processing speed of the system is increased by 2 to 3 times, and the timing error of data output is reduced by 70% to 80%, which fully demonstrates the practicability and effectiveness of the method.
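The ring-buffer decoupling of input, processing and output threads can be sketched with a bounded queue guarded by a condition variable (Python here rather than the system's C++/OpenMP, so it shows only the buffering pattern, not the multi-core parallel processing):

```python
import threading
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer decoupling a producer thread from a
    consumer thread: the producer blocks when full, the consumer
    blocks when empty."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()
        self.cv = threading.Condition()

    def put(self, item):
        with self.cv:
            while len(self.buf) >= self.capacity:
                self.cv.wait()           # wait until a slot frees up
            self.buf.append(item)
            self.cv.notify_all()

    def get(self):
        with self.cv:
            while not self.buf:
                self.cv.wait()           # wait until data arrives
            item = self.buf.popleft()
            self.cv.notify_all()
            return item

rb = RingBuffer(4)
results = []

def producer():                          # stand-in for frame acquisition
    for i in range(10):
        rb.put(i)

def consumer():                          # stand-in for defect processing
    for _ in range(10):
        results.append(rb.get())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

Because the buffer absorbs bursts from acquisition while processing catches up, output timing stays stable even when the two stages run at different instantaneous rates.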
Design and Implementation of Handheld Data Acquisition Terminal
HUANG Guo-rui, GUO Kang, WANG Shi-gui, JIANG Jin-bo
Computer Science. 2019, 46 (11A): 567-569. 
Abstract PDF(3494KB) ( 169 )   
References | RelatedCitation | Metrics
At present, data acquisition in field environments mainly relies on data collectors filling out preset forms manually and then submitting them to data entry staff for input into a computer database. This method of data acquisition involves multiple people, the process is complex, the data are easily lost, and data quality is difficult to guarantee. To solve these problems, this paper designed and implemented a handheld terminal based on the embedded microprocessor PXA270. The terminal integrates wired and wireless local area network, GPRS/GSM wide area network and other means of communication in hardware, integrates various encryption techniques in software, and implements encrypted communication software for the wide area network on the WinCE platform, which can effectively solve the problems of real-time data transmission and security under field conditions.
Human-machine Interaction System with Vision-based Gesture Recognition
SONG Yi-fan, ZHANG Peng, LIU Li-bo
Computer Science. 2019, 46 (11A): 570-574. 
Abstract PDF(2140KB) ( 1051 )   
References | RelatedCitation | Metrics
A human-machine interaction system is the connection between human and computer. With advances in computer technology, the mouse and keyboard are no longer sufficient; people now need natural and comfortable ways to manipulate computers. Gesture recognition is one of the important methods of human-machine interaction, but traditional methods have issues such as low prediction accuracy and complex processing procedures. To solve these problems, this paper proposed a deep learning-based gesture recognition algorithm. The method extracts gesture feature heat maps by pose estimation and then classifies them using a convolutional neural network, which overcomes the difficulty of image segmentation against complex backgrounds and improves recognition accuracy. The results show a 98% recognition accuracy. Finally, a gesture guessing game robot was designed with this method, demonstrating the application of gesture recognition in human-machine interaction.
Design of Typical Quadrotor UAV Based on System Architecture
WU Zhong-zhi, WANG Lei, MA Jian-ping, TAN Si-yang, GUO Man-yi
Computer Science. 2019, 46 (11A): 575-579. 
Abstract PDF(3346KB) ( 353 )   
References | RelatedCitation | Metrics
With the deep integration of industrialization and informationization, computer modeling and simulation technology has been widely used in the development of systems and products. However, there are problems of non-standard modeling and chaotic models. Based on an understanding of system architecture, this paper proposed a method of system design based on system architecture. Considering the key requirements of a typical UAV, the preliminary design of the system is carried out, the subsystem models are built based on the UAV system architecture, and the UAV system model is synthesized and simulated, realizing a standard modeling process. At the same time, taking the power system as the research object, the optimal matching of battery capacity and battery weight and the optimal matching of motor and propeller can be quickly determined based on the system architecture, thus realizing the optimal design of the power system. This paper provides a new process and method for UAV design.
Design and Implementation of Crowdsourcing System for Still Image Activity Annotation
HOU Yu-chen, WU Wei
Computer Science. 2019, 46 (11A): 580-583. 
Abstract PDF(2756KB) ( 215 )   
References | RelatedCitation | Metrics
To address the lack of annotation data in still image activity recognition research, a manual activity annotation system for still images was designed and developed on the Android platform with the idea of crowdsourcing. The system mainly includes functions for assigning annotation tasks, annotating image information, reviewing annotation information and examining historical annotation information. For annotation information with high review scores, web crawler technology is used to extract auxiliary text labels for the image, and the annotation information is converted into word embeddings and stored for the convenience of later experimental research. Meanwhile, the system applies a task assignment algorithm based on a pricing mechanism, which effectively improves the efficiency of user image annotation. Actual deployment of the system shows that operation is simple and smooth, the function modules of each algorithm are stable and efficient, and the advantages of mobile terminals are fully utilized to collect and organize image activity annotation data.
Crack Detection of Concrete Pavement Based on Convolutional Neural Network
WANG Li-ping, GAO Rui-zhen, ZHANG Jing-jun, WANG Er-cheng
Computer Science. 2019, 46 (11A): 584-589. 
Abstract PDF(3065KB) ( 827 )   
References | RelatedCitation | Metrics
In concrete road pavements, the presence of cracks often leads to major engineering and economic problems. At present, when computer vision technology is used for crack detection, manually designed feature extractors are needed to extract image features for classification, resulting in poor generalization ability and classification performance. In this paper, a crack detection method based on a convolutional neural network was proposed to realize automatic detection and classification of pavement defects and improve the efficiency and accuracy of pavement crack detection. Firstly, a crack convolutional neural network for concrete pavement is designed. The model is based on the AlexNet network architecture and is optimized at two levels: network structure and hyperparameters. Secondly, a camera collects concrete pavement images to obtain learning data; according to data set size and image color, four data sets are created: gray-scale maps and color RGB maps of 10000 and 20000 images each. Then the four created data sets are used to train the designed concrete crack convolutional neural network, creating a crack detection model that is compared with the original AlexNet model. Finally, the two models are compared with respect to factors such as data set size, image color, network structure and hyperparameters. The experimental results show that increasing the data set, using the color RGB maps, and adjusting the network structure and hyperparameters all help to improve classification accuracy. Compared with the original AlexNet network model, the proposed network model has higher recognition accuracy, up to 98.5% on color image samples, while avoiding gray-level preprocessing and improving the efficiency of crack detection.
Prediction of Air-conditioning Load in Metro Station Hall Based on BP Neutral Network
LI Ting-ting, BI Hai-quan, WANG Hong-lin, WANG Xiao-liang, ZHOU Yuan-long
Computer Science. 2019, 46 (11A): 590-594. 
Abstract PDF(2947KB) ( 338 )   
References | RelatedCitation | Metrics
The central air conditioning system is the primary energy-consuming equipment in the station buildings of urban rail transit systems. In the initial operational stage, there are many problems, such as the actual air conditioning load being far less than the designed load, the lack of real-time load values, and the inability to adjust dynamically according to the actual building load, which lead to high energy consumption at present. In this paper, the air conditioning system in the public area of a metro station hall is taken as the research object. On the basis of the air conditioning load calculation method, a load calculation model is established on the TRNSYS system simulation platform. The orthogonal test method is applied to design the test scheme, and simulation is used to study the factors that have a significant influence on the air conditioning energy consumption of the metro station hall. A load forecasting model for the air conditioning system is established based on the significance order of factors and BP neural network theory. The objective function is to minimize the error between the predicted load and the actual load, and the model is trained using simulation data as training samples. The training process was relatively stable with no obvious oscillation (R2=0.99956), and the coefficient of variation of the root mean square error between predicted and simulated load is small (3.6%). The maximum relative errors of the model are 9.8257% and 11.675% when passenger flow and weather change, respectively. The validation results indicate that the model has high prediction accuracy and good generalization ability; it is an effective method for air conditioning load forecasting in the public areas of metro station halls and can provide a basis for the air conditioning control systems of metro stations.
Comparison of Balancing Methods in Internet Finance Overdue Recognition:Taking PPDai.com As Case
LIU Hua-ling, LIN Bei, YUN Wen-jing, DING Yu-jie
Computer Science. 2019, 46 (11A): 595-598. 
Abstract PDF(2818KB) ( 252 )   
References | RelatedCitation | Metrics
The rapid development of Internet finance has made P2P network loans an innovative financing method for SMEs and individuals; therefore, how to identify the potential risks has become a hot issue. However, due to the serious imbalance between overdue and non-overdue samples, the overdue recognition rate is low. To solve this problem, this paper used random undersampling, SMOTE and Bagging to pre-process the data, and then compared the results using Logistic Regression (LR) and Support Vector Classification (SVC). The empirical results show that the balancing effect of Bagging is better than random undersampling and SMOTE in P2P overdue loan recognition. In addition, LR is more suitable than SVC for P2P overdue loan recognition, as it does not exhibit obvious over-fitting.
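Of the three balancing schemes compared, random undersampling is the simplest: drop majority-class samples until the classes are even. A sketch (SMOTE and Bagging are not reproduced; the seed and names are illustrative):

```python
import random

def undersample(samples, labels, seed=42):
    """Randomly keep only as many samples of each class as the
    smallest class has, so the classes come out balanced."""
    rng = random.Random(seed)
    by_cls = {}
    for s, y in zip(samples, labels):
        by_cls.setdefault(y, []).append(s)
    n_min = min(len(group) for group in by_cls.values())
    out = []
    for y, group in by_cls.items():
        for s in rng.sample(group, n_min):   # random subset of each class
            out.append((s, y))
    rng.shuffle(out)
    return out

# 90 non-overdue loans vs. 10 overdue loans, as stand-in data
balanced = undersample(list(range(100)), [0] * 90 + [1] * 10)
```

The cost of this simplicity is discarding majority-class information, which is one reason the paper finds Bagging-based balancing performs better.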
Study on Optimized Method for Predicting Paraffin Deposition of Pumping Wells Based on SCRF
WANG Li-jun, ZHI Zhi-ying, JIA Lu, LI Wei
Computer Science. 2019, 46 (11A): 599-603. 
Abstract PDF(2963KB) ( 175 )   
References | RelatedCitation | Metrics
In the oil field production process, paraffin deposition easily occurs in oil wells affected by various factors. Paraffin deposition usually causes blockage of oil wells and can even cause stuck wells or overload burning of electric motors, which greatly reduces oil well production and increases the cost of oil production. Predicting the paraffin deposition state of pumping wells in advance and realizing predictive maintenance of pumping well equipment can therefore reduce costs and increase efficiency for oil fields, and is of great significance for intelligent management. In order to improve the accuracy of paraffin deposition prediction on unbalanced data sets for pumping wells, this paper proposed an ensemble learning method for unbalanced data named SCRF. Firstly, the SMOTE method is used to oversample the minority classes in the original data set, increasing their number and reducing the imbalance ratio. Then the CLUSTER clustering method is used to stratify and undersample the new data set to generate the training data set. Finally, a random forest algorithm based on bagging is used on the training data set to generate the prediction model. The experimental results show that the prediction effect of the model is better after sample balancing, while the prediction efficiency and accuracy are improved to a certain extent.
Study on Real-time Smoke Simulation Algorithm Based on Programmable GPU
DENG Ding-sheng
Computer Science. 2019, 46 (11A): 604-608. 
Abstract PDF(1881KB) ( 291 )   
References | RelatedCitation | Metrics
With the growth of the domestic economy and the progress of science and technology, people's quality of life and standard of living have gradually improved, while the demand for culture has grown stronger, so higher requirements are placed on animation, film, television and related fields. Against this background, a real-time algorithm based on the programmable GPU that can effectively control smoke simulation has emerged, and this study builds on it. Through real-time experimental simulation, the programmable-GPU smoke simulation algorithm can run in real time on the basis of a predetermined smoke configuration, and can achieve shape and state transformations through fairly natural smoke flow. Through analysis and research on real-time smoke simulation algorithms for programmable GPUs, this work can provide reference and guidance for the further development of related technologies and industries in China.