Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
Current Issue
Volume 47 Issue 11A, 16 November 2020
  
Artificial Intelligence
New Development Direction of Artificial Intelligence-Human Cyber Physical Ternary Fusion Intelligence
WANG Hai-tao, SONG Li-hua, XIANG Ting-ting, LIU Li-jun
Computer Science. 2020, 47 (11A): 1-5.  doi:10.11896/jsjkx.200100053
Abstract PDF(1820KB) ( 4290 )   
References | Related Articles | Metrics
With the rapid development and wide application of artificial intelligence, informatics, sociology, physics and philosophy are converging rapidly, giving birth to a novel research field: human cyber physical intelligence (HCPI), also known as ternary fusion intelligence. It reflects the organic integration of physical space, information space and social space, and is one of the important directions and cutting-edge topics for the future development of artificial intelligence. Aiming at this new research hotspot, and starting with the origin of ternary fusion intelligence, this paper summarizes its basic concepts, related research contents and application development. Firstly, the backgrounds and definitions of ternary fusion intelligence are introduced. Then, its interaction patterns and typical characteristics are explained. Next, its relationship model and system model are elaborated. On this basis, the near-term realization goals and technical approaches of ternary fusion intelligence are discussed. Finally, the research status, typical applications and development trend of ternary fusion intelligence are summarized systematically.
Study on Intelligent Scheduling System of Composite Shop
ZHANG Wei, YU Cheng-long
Computer Science. 2020, 47 (11A): 6-10.  doi:10.11896/jsjkx.191000147
Abstract PDF(3142KB) ( 1005 )   
References | Related Articles | Metrics
To address the insufficient real-time performance and practicability of experience-based production scheduling in aerospace composite workshops, an aerospace intelligent scheduling system is proposed, and the architecture and function composition of the system are discussed. Furthermore, a calculating method based on perceptual information is studied, and a prototype system for the aerospace composite workshop is developed and verified. According to real-time production data and scheduling knowledge, the system can automatically judge and optimize resource allocation, calculate the man-hours of each process step, and rearrange and output the scheduling plan according to production disturbances. It provides a basis for the formulation of the enterprise scheduling program and lays the foundation for the development of subsequent engineering intelligent scheduling systems.
Global Typhoon Message Collection Method Based on CNN-typhoon Model
HAN Rui, GU Chun-li, LI Zhe, WU Kang, GAO Feng, SHEN Wen-hai
Computer Science. 2020, 47 (11A): 11-17.  doi:10.11896/jsjkx.201000038
Abstract PDF(3374KB) ( 1108 )   
References | Related Articles | Metrics
Typhoon is a highly convective weather system with high impact. As the source of typhoon initial values, typhoon message data helps improve the accuracy of typhoon forecasts, so rapid identification and collection of global typhoon messages is very important. Aiming at the problems of poor real-time performance, high latency and passive message reception of global typhoon messages, this study uses 8 983 infrared satellite images from the MSG, Meteosat-5, MTSAT, GOES-W and GOES-E satellites, covering 1 351 typhoon processes from January 2006 to August 2020. Based on deep learning, a CNN-typhoon model is proposed, which can identify and classify three types of images: no typhoon, typhoon generation, and strongest typhoon. Experiments show that the recognition accuracy of the CNN-typhoon model on the training set is close to 100%, and the validation set accuracy is higher than 88.1%. When the model is substituted into a simulated service, within a certain period of time the number of collected message types increases by nearly 31.0%, and message collection timeliness is improved by 23.5 times.
Named Entity Recognition in Field of Ancient Chinese Based on Lattice LSTM
CUI Dan-dan, LIU Xiu-lei, CHEN Ruo-yu, LIU Xu-hong, LI Zhen, QI Lin
Computer Science. 2020, 47 (11A): 18-23.  doi:10.11896/jsjkx.200500090
Abstract PDF(2170KB) ( 1394 )   
References | Related Articles | Metrics
This paper investigates named entity recognition in ancient Chinese literature based on the Complete Collection of Four Treasuries dataset, and proposes a recognition algorithm based on the Lattice LSTM model, which combines character-sequence and word-sequence information as model input. Using the Jiayan word segmentation tool, word2vec is used to train character-level and word-level embeddings of ancient Chinese as input to the Lattice LSTM model, which improves the performance of named entity recognition on ancient Chinese literature. Compared with the traditional Bi-LSTM-CRF model, the F1 score is improved by about 3.95%.
CNN_BiLSTM_Attention Hybrid Model for Text Classification
WU Han-yu, YAN Jiang, HUANG Shao-bin, LI Rong-sheng, JIANG Meng-qi
Computer Science. 2020, 47 (11A): 24-27.  doi:10.11896/jsjkx.200400116
Abstract PDF(2011KB) ( 1586 )   
References | Related Articles | Metrics
Text classification is the basis of many natural language processing tasks. Convolutional neural networks (CNN) can extract phrase-level features of text, but cannot capture its structural information well; recurrent neural networks (RNN) can extract the global structural information of text, but their ability to capture key pattern information is insufficient. An attention mechanism can learn the contribution of different words or phrases to the overall semantics of a text, assigning higher weights to key words or phrases, but it is not sensitive to global structural information. In addition, most existing models only consider word-level information and ignore phrase-level information. In view of these problems, this paper proposes a hybrid model which integrates CNN, RNN and attention. The model simultaneously considers key pattern information and global structural information at different levels, and fuses them to obtain the final text representation, which is fed into a softmax layer for classification. Experiments on multiple text classification datasets show that the model achieves higher accuracy than existing models. In addition, the effects of different components on model performance are analyzed through experiments.
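The attention step described above, which learns a weight for each token so that key words contribute more to the final text representation, can be sketched as follows. This is a minimal illustration, not the paper's model: the scoring vector `w` and the toy feature matrix are assumptions, and in the full hybrid model `H` would come from the CNN/BiLSTM encoders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))           # shift for numerical stability
    return e / e.sum()

def attention_pool(H, w):
    """H: (T, d) token features from an encoder; w: (d,) learnable scoring
    vector. Returns the attention distribution and the pooled text vector."""
    scores = H @ w                       # one relevance score per token
    alpha = softmax(scores)              # distribution over tokens
    return alpha, alpha @ H              # weighted sum = text representation
```

In the hybrid model the pooled vector would then be fused with the other feature views before the softmax classification layer.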
Context-based Emotional Word Vector Hybrid Model
HUO Dan, ZHANG Sheng-jie, WAN Lu-jun
Computer Science. 2020, 47 (11A): 28-34.  doi:10.11896/jsjkx.191100114
Abstract PDF(2402KB) ( 1077 )   
References | Related Articles | Metrics
Most existing word-vector learning methods only model the syntactic context of words and ignore their emotional information. This paper proposes a context-based training model for emotional word vectors, and uses a relatively simple method to construct a learning framework for them. A fusion method is proposed that combines sentence-polarity emotion information with context-based word vectors in an extended hybrid model, so as to solve the problem that words with similar contexts but opposite emotional polarities are mapped to adjacent word vectors; in the resulting emotion vector space, adjacent words are semantically similar and share the same emotional polarity. To verify that the learned emotional word vectors accurately encode both emotion and context information, they are trained on different languages and on datasets from different domains, and quantitative experiments are conducted at the word level. The results show that the classification effect of the proposed model is 14 percent higher than that of the traditional model. In word-level emotion classification experiments, the accuracy is improved by 10 percentage points compared with the traditional bag-of-words model. The model also helps product providers extract useful information from large numbers of user reviews.
Transcriptome Analysis Method Based on RNA-Seq
GUO Mao-zu, YANG Shuai, ZHAO Ling-ling
Computer Science. 2020, 47 (11A): 35-39.  doi:10.11896/jsjkx.200600057
Abstract PDF(1840KB) ( 1911 )   
References | Related Articles | Metrics
RNA-Seq technology has become an important method of transcriptome analysis because of its low cost, high precision and wide coverage. It provides new means for the study of gene expression patterns, disease biomarker detection, crop stress resistance research and molecular breeding. However, the massive data generated by RNA-Seq also brings challenges to data analysis, and how to effectively process and analyze RNA-Seq data has become a hot topic in bioinformatics research. This paper introduces the transcriptome analysis process based on RNA-Seq technology, including RNA-Seq data preprocessing, differential expression analysis and high-level analysis. RNA-Seq data preprocessing performs quality control and quantitative calculations on the original sequencing data; differential expression analysis screens genes, usually based on statistics or machine learning; high-level analysis further processes the differential genes and determines gene function and regulatory networks through enrichment analysis and other means. Finally, the development prospects of RNA-Seq-based transcriptome analysis methods are discussed.
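The statistics-based gene screening step mentioned above can be sketched with a deliberately simple fold-change plus Welch t-statistic filter. This is only an illustration of the idea: the cutoffs are hypothetical, and production pipelines (e.g. DESeq2, edgeR) instead fit negative-binomial models with proper p-values and multiple-testing correction.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch t-statistic for two replicate lists of unequal variance."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

def screen_de_genes(expr_a, expr_b, fc_cut=1.0, t_cut=2.0):
    """expr_a/expr_b: {gene: [normalized counts per replicate]} for two
    conditions. Flag genes with both a large log2 fold change and a large
    t-statistic (thresholds here are illustrative assumptions)."""
    hits = []
    for g in expr_a:
        log2fc = math.log2((mean(expr_a[g]) + 1) / (mean(expr_b[g]) + 1))
        t = welch_t(expr_a[g], expr_b[g])
        if abs(log2fc) >= fc_cut and abs(t) >= t_cut:
            hits.append(g)
    return hits
```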
Essential Protein Identification Method Based on Structural Holes and Fusion of Multiple Data Sources
YANG Zhuang, LIU Pei-qiang, FEI Zhao-jie, LIU Chang
Computer Science. 2020, 47 (11A): 40-45.  doi:10.11896/jsjkx.200200004
Abstract PDF(3232KB) ( 737 )   
References | Related Articles | Metrics
Essential protein identification is a difficult research hotspot in the field of computational biology. Existing computational methods for identifying essential proteins, mainly DC, BC, LAC, PeC, ION and LIDC, still need further improvement in identification accuracy, mainly because they use only one data source, the protein interaction network, which contains many false positive and false negative data. In order to improve identification accuracy, an efficient essential protein identification method, PSHC, is proposed. Firstly, PSHC introduces structural hole theory into essential protein identification for the first time. Secondly, PSHC combines two data sources, the protein interaction network and protein complexes, to identify essential proteins. Experimental results on real data show that PSHC can identify more essential proteins than other traditional methods, and statistical indicators such as sensitivity, specificity, accuracy, positive predictive value, negative predictive value and F-measure are also higher than those of other methods.
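Structural-hole theory is commonly quantified by Burt's network constraint: a node with low constraint bridges otherwise disconnected neighborhoods. The sketch below computes constraint on a small unweighted graph; it only illustrates the structural-hole ingredient, and is not PSHC's actual scoring formula, which also fuses protein complex data.

```python
def constraint(graph, i):
    """Burt's constraint of node i in an unweighted undirected graph
    {node: set(neighbors)}; lower constraint = spans more structural holes."""
    def p(a, b):
        # proportion of a's ties invested in b (uniform over neighbors)
        return (1.0 / len(graph[a])) if b in graph[a] else 0.0
    total = 0.0
    for j in graph[i]:
        indirect = sum(p(i, q) * p(q, j) for q in graph[i] if q != j)
        total += (p(i, j) + indirect) ** 2
    return total
```

A hub connecting otherwise unlinked proteins gets a low constraint, while a node inside a closed triangle gets a high one, which is why low constraint can flag topologically important proteins.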
Analysis of Emotional Degree of Poetry Reading Based on WDOUDT
DONG Ben-qing, LI Feng-kun
Computer Science. 2020, 47 (11A): 46-51.  doi:10.11896/jsjkx.200600055
Abstract PDF(1908KB) ( 834 )   
References | Related Articles | Metrics
In this paper, a new unbalanced decision tree algorithm for the emotional appeal of poetry reading is proposed, called Weighted Division of Unbalanced Decision Tree (WDOUDT). Based on a study of the indices of poetry reading appeal, mel-frequency cepstral coefficients are extracted from the reading audio, and the decision tree, the most interpretable method, is used for modelling. WDOUDT uses neither evolutionary algorithms nor heuristic information search; it is applied to the emotional scoring of poetry reading audio, and its time complexity is lower than that of traditional decision trees. The proposed algorithm has fewer nodes, better generalization ability, and better robustness to noisy data.
Study on Information Extraction of Power Grid Fault Emergency Pre-plans Based on Deep Learning
SHI He, YANG Qun, LIU Shao-han, LI Wei
Computer Science. 2020, 47 (11A): 52-56.  doi:10.11896/jsjkx.191100210
Abstract PDF(2063KB) ( 955 )   
References | Related Articles | Metrics
Emergency pre-plans are maintained by the power grid dispatching department, which formulates them based on power grid operation and maintenance experience to assist dispatchers in dealing with emergency faults. When an emergency fault occurs in the power grid, it is necessary to extract the key information of the pre-plans so that dispatchers can learn from previous experience and quickly retrieve and match similar accidents in the pre-plans. However, the traditional processing method for power grid fault emergency pre-plans is neither versatile nor scalable, and cannot effectively digitize the pre-plans, which limits its scope of application. In this paper, a deep learning method is used to make up for these shortcomings. It analyzes the syntax of sentences in the dispatch emergency pre-plans and generates a syntactic parser, which produces the syntactic parse tree of each sentence. After that, the system state information and disposal points of the emergency pre-plans are extracted, and the unstructured text of the pre-plans is transformed into structured data. Using the deep learning method, power grid emergency pre-plans can be effectively managed, and dispatchers can determine the fault type and carry out operations quickly. This paper also verifies the method experimentally; it can be concluded that the method improves the efficiency of fault processing, has good versatility and strong expansibility, and allows continuous improvement of the model.
System Fault Diagnosis Method Based on Mahalanobis Distance Metric
LIN Yi, JI Hong-jiang, HAN Jia-jia, ZHANG De-ping
Computer Science. 2020, 47 (11A): 57-63.  doi:10.11896/jsjkx.190900174
Abstract PDF(2855KB) ( 994 )   
References | Related Articles | Metrics
In view of the multi-index correlation problems in previous fault diagnosis methods, and their computational complexity and low efficiency when multiple integrals are involved, a system fault diagnosis method based on the Mahalanobis Distance (MD) metric is proposed. For the system performance state data monitored on a device, the proposed method uses an MD-based area metric to compare the distribution of Mahalanobis distances of known data samples with that of observed data samples. Specifically, the MD is first used to convert multivariate data into univariate data, which eliminates the correlation between variables and avoids the complexity and uncertainty of evaluating a multivariate joint distribution with multiple integrals. Then the area metric is used to compare the difference between the cumulative distribution functions of the univariate data: the area between the distribution curves is calculated by definite integration, and the fault category of the sample is the one with the smaller area value. Comparison with common fault diagnosis methods (BP neural network and Naïve Bayes) shows that the proposed method is simple and effective, has a high fault diagnosis rate, greatly reduces computation cost, and improves system fault diagnosis efficiency.
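The two building blocks of this method, reducing multivariate samples to scalar Mahalanobis distances and then integrating the gap between two empirical CDFs, can be sketched as follows. The grid resolution and the synthetic data in the usage are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mahalanobis(X, ref):
    """MD of each row of X w.r.t. the mean and covariance of ref (n x d),
    collapsing correlated multivariate data to a single variable."""
    mu = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

def cdf_area(u, v, n_grid=512):
    """Area between the empirical CDFs of two 1-D samples, computed as a
    definite integral (trapezoid rule) over a shared grid."""
    grid = np.linspace(min(u.min(), v.min()), max(u.max(), v.max()), n_grid)
    Fu = np.searchsorted(np.sort(u), grid, side='right') / len(u)
    Fv = np.searchsorted(np.sort(v), grid, side='right') / len(v)
    gap = np.abs(Fu - Fv)
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid)))
```

Diagnosis then reduces to computing the area between the observed sample's MD distribution and each known fault class's MD distribution, and picking the class with the smallest area.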
PCANet-based Multi-factor Stock Selection Model for Value Growth
ZHANG Ning, SHI Hong-wei, ZHENG Lang, SHAN Zi-hao, WU Hao-xiang
Computer Science. 2020, 47 (11A): 64-67.  doi:10.11896/jsjkx.200300086
Abstract PDF(1925KB) ( 1761 )   
References | Related Articles | Metrics
As an important part of quantitative investment programs, the quantitative multi-factor stock selection model predicts stock returns by modeling historical financial data, and many machine learning methods, including deep learning, have been introduced into it. This paper explores, for the first time, the application of PCANet to quantitative stock selection. By transforming factors from financial time series data into two-dimensional image data, the financial time series prediction problem is converted into an image classification problem, which provides a new and more open perspective. The research objects are the CSI 300 (Shanghai and Shenzhen 300) stocks from January 1, 2009 to June 6, 2017, used for PCANet training and prediction. In the two-year backtest, the model obtains a Sharpe ratio of 57.17%, an excess return of 16.84%, and a maximum drawdown of -18.14%. Compared with a CNN model and a linear regression model, it achieves a higher Alpha return and Sharpe ratio, and its maximum drawdown is smaller than that of the linear regression model. This shows that using PCANet for multi-factor stock selection is feasible: it retains the feature extraction capability of a deep learning structure, and extracts factor features more effectively than linear regression. It will be a new direction worth trying.
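The distinctive step in PCANet is that its convolution filters are not learned by backpropagation but are simply the top principal components of mean-removed image patches. The sketch below shows that first-stage filter-bank learning; the patch size, filter count and random "factor images" are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def pca_filters(images, k=4, patch=3):
    """Learn k PCA convolution filters (patch x patch) from the mean-removed
    patches of 2-D images: the first-stage filter bank of PCANet."""
    patches = []
    for img in images:
        H, W = img.shape
        for r in range(H - patch + 1):
            for c in range(W - patch + 1):
                p = img[r:r+patch, c:c+patch].ravel()
                patches.append(p - p.mean())     # remove the patch mean
    X = np.array(patches)                        # n x patch^2
    # top eigenvectors of the patch scatter matrix become the filters
    vals, vecs = np.linalg.eigh(X.T @ X)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return top.T.reshape(k, patch, patch)
```

Because the filters are eigenvectors of a symmetric matrix, they come out orthonormal, which is what makes this stage cheap and training-free compared with a CNN.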
Bat Optimization Algorithm Based on Cosine Control Factor and Iterative Local Search
ZHENG Hao, YU Jun-yang, WEI Shang-fei
Computer Science. 2020, 47 (11A): 68-72.  doi:10.11896/jsjkx.200200063
Abstract PDF(2033KB) ( 763 )   
References | Related Articles | Metrics
To solve the problem that the bat algorithm easily falls into local optima when solving high-dimensional complex problems, an improved bat algorithm is proposed. Firstly, a nonlinear inertia weight controlled by a cosine factor is added to the bat velocity formula to dynamically adjust the balance between global search and local search, improving the accuracy and stability of the algorithm. Secondly, at the end of each iteration, the concept of iterated local search is introduced: the local optimal solution is perturbed to obtain an intermediate state, which is then re-searched to obtain the global optimal solution, enabling the algorithm to jump out of local optima quickly. Finally, simulation results on 12 complex benchmark functions, compared with other algorithms, show that the improved algorithm overcomes the problems of low precision, premature convergence to local extrema and unstable solutions.
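The role of the cosine-controlled inertia weight in the velocity update can be sketched as below. This is a simplified variant, not the paper's algorithm: the 0.9-to-0.4 cosine decay schedule, the attraction-toward-best velocity form, and the local random walk parameters are all assumptions standing in for the published formulas.

```python
import math
import random

def bat_optimize(f, dim, n_bats=20, iters=200, seed=1):
    """Minimize f over [-5, 5]^dim with a bat-algorithm-style search whose
    velocity inertia decays under a cosine schedule (an assumed form)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=f)[:]
    trace = [f(best)]                        # best value per iteration
    for t in range(iters):
        # nonlinear inertia: cosine decay from 0.9 (global) to 0.4 (local)
        w = 0.4 + 0.5 * math.cos(math.pi * t / (2 * iters))
        for i in range(n_bats):
            freq = rng.uniform(0, 1)         # pulse frequency
            for d in range(dim):
                vel[i][d] = w * vel[i][d] + freq * (best[d] - pos[i][d])
                pos[i][d] += vel[i][d]
            if rng.random() > 0.5:           # local random walk near the best
                pos[i] = [b + 0.01 * rng.gauss(0, 1) for b in best]
            if f(pos[i]) < f(best):
                best = pos[i][:]
        trace.append(f(best))
    return best, trace
```

Early in the run the large inertia keeps exploration global; as the weight shrinks, the swarm contracts around the incumbent best, which is exactly the global/local balance the abstract describes.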
Keyword Extraction Based on Multi-feature Fusion
DUAN Jian-yong, YOU Shi-xin, ZHANG Mei, WANG Hao
Computer Science. 2020, 47 (11A): 73-77.  doi:10.11896/jsjkx.200300121
Abstract PDF(2107KB) ( 865 )   
References | Related Articles | Metrics
With the development of the Internet, webpage data, new media text and other data keep increasing, and full-text information retrieval is no longer efficient enough to support the retrieval of massive data, so keyword extraction technology is widely used in search engines (such as Baidu search) and new media services (such as news retrieval). The fusion model proposed here uses the BiLSTM-CRF structure and fuses multiple manual features, which completes the keyword extraction task more effectively. On top of word embedding features, the fusion model incorporates part-of-speech, word frequency, word length and word position features. This multidimensional feature information helps the model extract deep keyword features more comprehensively. The fusion model combines the strengths of deep learning, such as wide coverage and high learning ability, with the accurate expression of manual features, further improving feature mining ability and shortening training time. In addition, a labeling method called LMRSN is adopted to extract key phrases more effectively. Experimental results show that the fusion model achieves an F1 score of 62.08, performing much better than the traditional model.
Sentiment Classification of Network Reviews Combining Extended Dictionary and Self-supervised Learning
JING Li, LI Man-man, HE Ting-ting
Computer Science. 2020, 47 (11A): 78-82.  doi:10.11896/jsjkx.200400061
Abstract PDF(1793KB) ( 1024 )   
References | Related Articles | Metrics
In the rapidly developing Internet era, sentiment analysis of online reviews plays an important role in analyzing public opinion and monitoring e-commerce. Existing classification methods mainly include sentiment dictionary methods and machine learning methods. The sentiment dictionary method relies heavily on the sentiment words in the dictionary: the more complete the dictionary and the more pronounced the sentiment tendency of a review, the better the classification effect, while reviews whose sentiment tendencies are hard to distinguish are classified poorly. The machine learning method is supervised, and its classification effect relies on a large amount of pre-annotated corpora; currently, corpus annotation is done manually, with an extremely large workload. This paper combines the characteristics of the two methods to build a new sentiment classification model for online reviews. First, the sentiment dictionary is expanded for the domain of the online reviews, and the sentiment value of each review is calculated according to the extended dictionary. According to a preset sentiment threshold, reviews with significant sentiment tendencies and higher accuracy are selected as the definite set, and the rest, which are not easily distinguished, form the uncertain set; the classification of the definite set is directly determined by the sentiment value. Second, using the definite set obtained from the dictionary method, a classifier is trained through self-supervised learning, so the training data require no manual annotation. Finally, the trained classifier classifies the uncertain set, and an improved algorithm further refines its classification result. Experiments show that, compared with the sentiment dictionary method and the machine learning method, the proposed model achieves a better sentiment classification effect on hotel reviews and Jingdong reviews.
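The first stage of this pipeline, scoring reviews against a lexicon and splitting them into a high-confidence definite set and an uncertain set, can be sketched as follows. The tiny lexicon, the negation handling and the threshold are illustrative assumptions; the paper's extended domain dictionary would take their place, and the definite set would then self-train the classifier for the uncertain set.

```python
POS = {"good": 1.0, "great": 2.0, "clean": 1.0}
NEG = {"bad": -1.0, "dirty": -2.0, "noisy": -1.0}
LEXICON = {**POS, **NEG}            # an extended domain dictionary goes here
NEGATORS = {"not", "no"}

def sentiment_score(tokens):
    """Sum lexicon values over tokens; a preceding negator flips the sign."""
    score, flip = 0.0, 1
    for tok in tokens:
        if tok in NEGATORS:
            flip = -1
            continue
        score += flip * LEXICON.get(tok, 0.0)
        flip = 1
    return score

def split_reviews(reviews, threshold=1.0):
    """Definite set: |score| >= threshold, labeled by the score's sign
    (positive=1, negative=0); uncertain set: left for the classifier."""
    definite, uncertain = [], []
    for toks in reviews:
        s = sentiment_score(toks)
        if abs(s) >= threshold:
            definite.append((toks, 1 if s > 0 else 0))
        else:
            uncertain.append(toks)
    return definite, uncertain
```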
Complete Contradiction and Smallest Contradiction Based on Propositional Logic
TANG Lei-ming, BAI Mu-chen, HE Xing-xing, LI Xing-yu
Computer Science. 2020, 47 (11A): 83-85.  doi:10.11896/jsjkx.200400072
Abstract PDF(1577KB) ( 910 )   
References | Related Articles | Metrics
Resolution is a simple, reliable and complete inference rule in automated reasoning, and contradiction is an important extension of the resolution principle. Based on deductive reasoning with contradictions in propositional logic, this paper studies the nature of the contradiction, puts forward the concepts of the complete contradiction and the smallest contradiction, and obtains related properties and theorems. Their main contents are as follows: 1) the characteristics of each special contradiction; 2) strategies for adding literals, and non-extending changes of clauses, when a clause is added to the complete contradiction; 3) the law of the smallest contradiction within the complete contradiction when clauses and related literals are added; 4) a smallest contradiction can be extended to a complete contradiction. These conclusions enable complete contradictions and smallest contradictions to be converted into each other by adding new clauses or related literals, providing theoretical support for further applying deductive reasoning with contradictions to computer-based solving.
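For readers unfamiliar with the underlying rule, plain propositional resolution (of which the paper's contradiction calculus is an extension) can be sketched as below. Clauses are frozensets of nonzero integers in DIMACS style (a negative integer is a negated variable); a clause set is contradictory exactly when the empty clause is derivable. This shows only standard resolution, not the paper's complete/smallest contradiction constructions.

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of nonzero ints)."""
    out = []
    for lit in c1:
        if -lit in c2:                       # complementary pair found
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

def is_contradiction(clauses):
    """Saturate the clause set under resolution; the set is unsatisfiable
    iff the empty clause appears."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:                # empty clause derived
                        return True
                    if r not in clauses:
                        new.add(r)
        if not new:
            return False                     # saturated without contradiction
        clauses |= new
```

For example, {p ∨ q, ¬p ∨ q, ¬q} resolves to {q}, then to the empty clause, so the set is a contradiction.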
New Machine Translation Model Based on Logarithmic Position Representation and Self-attention
JI Ming-xuan, SONG Yu-rong
Computer Science. 2020, 47 (11A): 86-91.  doi:10.11896/jsjkx.200200003
Abstract PDF(2559KB) ( 840 )   
References | Related Articles | Metrics
In machine translation, the self-attention mechanism has attracted widespread attention for its highly parallelizable computation, which significantly reduces model training time, and for its ability to effectively capture the semantic relevance between all words in the context. However, unlike recurrent neural networks, the efficiency of self-attention stems from ignoring the positional information between context words. To let the model use this positional information, the Transformer machine translation model, which is based on self-attention, represents the absolute position of each word with sine and cosine functions. Although this method can reflect relative distance, it lacks directionality. Therefore, a new machine translation model based on logarithmic position representation and self-attention is proposed, which inherits the efficiency of self-attention while retaining both distance and directionality between words. The results show that the new model can significantly improve translation accuracy compared with the traditional self-attention model and other models.
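The direction problem can be made concrete with a small sketch: the dot product between two sinusoidal position encodings depends only on the unsigned offset, so position 10 "sees" positions 14 and 6 identically, while a signed logarithmic distance keeps both magnitude and direction. The `log1p`-based form below is an assumed illustration of a logarithmic position representation, not necessarily the paper's exact formula.

```python
import numpy as np

def sinusoidal_pe(pos, d):
    """Transformer's absolute position encoding (interleaved sin/cos)."""
    i = np.arange(d // 2)
    angles = pos / (10000 ** (2 * i / d))
    pe = np.empty(d)
    pe[0::2] = np.sin(angles)
    pe[1::2] = np.cos(angles)
    return pe

def log_relative_position(i, j):
    """Signed logarithmic distance: encodes how far apart i and j are
    and on which side j lies, unlike the symmetric sinusoidal case."""
    return np.sign(j - i) * np.log1p(abs(j - i))
```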
Finite Basis of Implicational System Associated with Finite Models of Description Logic FL0 Under the Greatest Fixed Point Semantics
ZHENG Tian-jian, HOU Jin-hong, ZHANG Wei, WANG Ju
Computer Science. 2020, 47 (11A): 92-96.  doi:10.11896/jsjkx.200300188
Abstract PDF(1613KB) ( 706 )   
References | Related Articles | Metrics
Description logic and formal concept analysis are two different concept-based formalisms, each with its own advantages and disadvantages, and researchers have recently begun to combine them. In this paper, methods of formal concept analysis are introduced into description logic research to analyze the finite basis of finite models of FL0 under the greatest fixed point semantics. In formal concept analysis, the Duquenne-Guigues basis always exists as long as the attribute set is finite. The finite model of a cyclic FL0 terminology under the greatest fixed point semantics is taken as the description context, FL0 concepts as attributes, and implications are defined in this context. It is proved that the finite model has a finite basis, which is both sound and complete.
Multi-document Automatic Summarization Based on Sparse Representation
QIAN Ling-long, WU Jiao, WANG Ren-feng, LU Hui-juan
Computer Science. 2020, 47 (11A): 97-105.  doi:10.11896/jsjkx.200300087
Abstract PDF(3195KB) ( 718 )   
References | Related Articles | Metrics
Automatic document summarization is an important task in natural language processing. Limited by the difficulty of accurately understanding document semantics, most methods rank sentences by handcrafted features, such as word frequency and keywords, to extract the abstract. Inspired by sparse representation theory, a dynamic semantic space partition algorithm based on sparse representation is proposed. The algorithm performs dictionary learning on the initially divided semantic subspaces, uses the obtained dictionary to sparsely reconstruct each sentence vector, dynamically reassigns the sentence to the partition with the smallest reconstruction error, and iteratively re-partitions the semantic space. To extract sentences within each divided semantic subspace, an automatic extraction algorithm based on sparse similarity ranking is proposed. All sentence vectors in a semantic subspace are viewed as dictionary atoms; through sparse reconstruction, a sparse similarity is obtained that reflects the degree to which one sentence semantically represents the others. The cumulative sparse similarity of each sentence to the other sentences measures the sentence's ability to represent the spatial semantic information; sentences are ranked by cumulative sparse similarity and the top N are extracted. Experimental results on travel reviews of popular attractions from the TripAdvisor website show that the semantic space reconstruction error drops rapidly within 5 iterations and then remains stable, demonstrating convergence. Besides effectively reducing the reconstruction error by nearly 17%, the algorithm is insensitive to data dimensionality. The proposed method avoids repeatedly abstracting redundant and highly repetitive text, and is an effective multi-document automatic summarization method.
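The sparse-similarity ranking idea, reconstructing each sentence from the other sentences as dictionary atoms and crediting each atom with the coefficient weight it receives, can be sketched as follows. Orthogonal matching pursuit is used here as one assumed choice of sparse solver, and the sparsity level `k` and toy vectors are illustrative, not the paper's configuration.

```python
import numpy as np

def omp(D, y, k=2):
    """Orthogonal matching pursuit: sparse code of y over the columns of
    the dictionary D (d x n), using at most k atoms."""
    residual, support = y.astype(float), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                   # never reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def representativeness(S, k=2):
    """S: d x n matrix of sentence vectors. Reconstruct each sentence from
    the others; a sentence's score is the total |coefficient| weight it
    receives, i.e. how much it helps represent the rest of the subspace."""
    n = S.shape[1]
    score = np.zeros(n)
    for i in range(n):
        D = np.delete(S, i, axis=1)
        x = omp(D, S[:, i], k)
        idx = [j for j in range(n) if j != i]
        for j, c in zip(idx, x):
            score[j] += abs(c)
    return score
```

Ranking by this cumulative score and taking the top N sentences gives the extraction step; a sentence unlike all the others collects no coefficient weight and is never selected.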
Variable Ordering Selection for Cylindrical Algebraic Decomposition Based on Hierarchical Neural Network
ZHU Zhang-peng, CHEN Chang-bo
Computer Science. 2020, 47 (11A): 106-110.  doi:10.11896/jsjkx.200100018
Abstract PDF(2385KB) ( 848 )   
References | Related Articles | Metrics
Cylindrical algebraic decomposition (CAD) is a widely used approach for computing the real solutions of polynomial systems, and the choice of variable ordering has a significant impact on its computation time. Most existing ordering selection algorithms are heuristic and empirical, and their accuracy is not high; the few machine learning approaches use small data sets and rely on complex handcrafted features. In this paper, on the basis of randomly generating a large set of polynomial systems, tagged with the timings obtained by computing CAD under different orderings, a new kind of explicit representation feature and a new hierarchical neural network are proposed. Firstly, according to the CAD computation time under the worst ordering, the data set is divided into four subsets of different computational difficulty, and a classification model is built for each. Secondly, a regression model for predicting the longest computation time is built. Finally, the longest computation time is predicted by the regression model, based on which the classification model of the right computational difficulty is automatically selected to predict the optimal variable ordering. Experimental results show that the explicit features outperform complex handcrafted features, and on difficult problems the optimal ordering predicted by the hierarchical neural network performs about two times better than an empirical algorithm.
Realtime Multi-obstacle Avoidance Algorithm Based on Dynamic System
WANG Wei-guang, YIN Jian, QIAN Xiang-li, ZHOU Zi-hang
Computer Science. 2020, 47 (11A): 111-115.  doi:10.11896/jsjkx.200800068
Abstract PDF(3275KB) ( 1157 )   
References | Related Articles | Metrics
With the increasing application of autonomous robot control, the risk of dynamic interference in the working area also increases. Aiming at real-time obstacle avoidance for robots in the workspace, this paper proposes a real-time multi-obstacle avoidance algorithm based on dynamical systems. Firstly, the modulation model of the dynamical system is constructed and the modulation matrix is set up; then the obstacle avoidance path is constructed; finally, a multi-obstacle dynamic avoidance model is proposed. The algorithm no longer requires a prior analysis of obstacles; it directly calculates the modulation matrix from the obstacles in the current scene, and uses dynamical system modulation to make obstacles impenetrable without changing the equilibrium point of the dynamical system. In simulation experiments on avoiding spatially attached obstacles, the proposed algorithm is compared with the continuous modulation (CM) algorithm, and its effectiveness is verified. The simulation results show that the algorithm can effectively solve path planning with both static and dynamic multiple obstacles.
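The modulation idea can be sketched for a single circular obstacle in 2-D: a matrix M(x) reshapes the nominal velocity field so that its component into the obstacle vanishes exactly on the boundary, which is what "impenetrable without moving the equilibrium" means. The circular distance function and the linear attractor below are standard textbook choices assumed for illustration, not the paper's multi-obstacle formulation.

```python
import numpy as np

def modulation_matrix(x, x_obs, r):
    """M(x) = E D E^{-1} for a circular obstacle of radius r at x_obs.
    The normal eigenvalue 1 - 1/Gamma vanishes on the boundary (Gamma = 1),
    while the tangential eigenvalue 1 + 1/Gamma stretches flow around it."""
    d = x - x_obs
    gamma = (np.linalg.norm(d) / r) ** 2       # >1 outside, =1 on boundary
    n = d / np.linalg.norm(d)                  # unit normal
    t = np.array([-n[1], n[0]])                # unit tangent
    E = np.column_stack([n, t])
    D = np.diag([1 - 1 / gamma, 1 + 1 / gamma])
    return E @ D @ np.linalg.inv(E)

def modulated_velocity(x, target, x_obs, r):
    """Modulate a linear attractor f(x) = target - x around the obstacle."""
    return modulation_matrix(x, x_obs, r) @ (target - x)
```

Far from the obstacle Gamma is large, M approaches the identity, and the original dynamics (and its equilibrium at the target) are recovered unchanged.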
Implementation of Financial Venture Capital Score Card Model Based on Logistic Regression
BIAN Yu-ning, LU Li-kun, LI Ye-li, ZENG Qing-tao, SUN Yan-xiong
Computer Science. 2020, 47 (11A): 116-118.  doi:10.11896/jsjkx.200400017
Abstract PDF(1676KB) ( 1330 )   
References | Related Articles | Metrics
This paper takes the problem of customer default in current bank credit business as its starting point and maps the relationship between customer default rate and credit score card value. Logistic regression is used to build the prediction model of the score card, and the gradient descent algorithm is used to construct the customer score card for bank venture capital. The data is first loaded and analyzed, then the data set is partitioned, and a cross-time validation set is used for the final validation of the model. Finally, the KS value and the ROC curve are used to evaluate the stability of the model. Experimental results show that the score card model constructed by the proposed method has good stability.
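The KS value mentioned above is the maximum gap between the cumulative distributions of defaulters and non-defaulters across score thresholds; a minimal numpy sketch (toy data, not the paper's data set):

```python
import numpy as np

def ks_statistic(scores, labels):
    """KS = max |F_bad(s) - F_good(s)| over score thresholds s.

    scores: model outputs; labels: 1 = default (bad), 0 = non-default (good).
    Higher KS means the score card separates good and bad customers better.
    """
    order = np.argsort(scores)
    labels = np.asarray(labels)[order]
    cum_bad = np.cumsum(labels) / labels.sum()
    cum_good = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.max(np.abs(cum_bad - cum_good))
```

A perfectly separating score card reaches KS = 1; in credit scoring practice, values above roughly 0.3 are usually considered acceptable.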
Hybrid Search Algorithm for Two Dimensional Guillotine Rectangular Strip Packing Problem
GUO Chao, WANG Lei, YIN Ai-hua
Computer Science. 2020, 47 (11A): 119-125.  doi:10.11896/jsjkx.200200016
Abstract PDF(1758KB) ( 1309 )   
References | Related Articles | Metrics
The guillotine rectangular strip packing problem is NP-hard. It has industrial applications such as glass cutting and integrated-circuit layout design, which can be formulated as packing problems with the objective of maximizing the usage ratio of the material. The approach is as follows. Firstly, a hybrid search algorithm is presented for solving the two-dimensional guillotine rectangular packing problem (2D-GRPP); this algorithm is then adapted to the two-dimensional guillotine rectangular strip packing problem (2D-GRSPP) by means of jump search and binary search. Following the quasi-human approach, basic definitions such as corner-occupying action, action space, maximal height and rectangle combination are presented to derive the basic algorithm. Built on the basic algorithm, the hybrid search algorithm involves three phases. In the first phase, the initial solution is generated. In the second phase, a local search procedure adjusts the priority numbers of the rectangles; when it encounters a local optimum, an off-trap strategy is used to jump out of the trap and guide the search into new areas. In the third phase, a beauty-degree enumeration procedure improves the selection of corner-occupying actions. The hybrid search algorithm (HS) is tested on two sets of 91 benchmark instances. The computational results show that the proposed algorithm generally outperforms the best heuristic in the literature to date (SPTRS): the mean relative errors of HS and SPTRS are 3.83% and 4.26%, respectively. The HS algorithm is thus efficient for solving this problem.
Cross-media Knowledge Graph Construction for Electric Power Metering Based on Semantic Correlation
XIAO Yong, QIAN Bin, ZHOU Mi
Computer Science. 2020, 47 (11A): 126-131.  doi:10.11896/jsjkx.200300115
Abstract PDF(4132KB) ( 982 )   
References | Related Articles | Metrics
Facing the field of electric power metering, this paper proposes a cross-media knowledge graph construction method based on semantic correlation. There is a semantic gap between the low-level features of different types of media, which are therefore difficult to associate directly. However, different types of media describing the same entity share the same semantic tag information at the high level; this is the so-called semantic correlation. Based on the characteristics of knowledge in the field of electric power metering, this paper builds the cross-media knowledge graph through core steps such as semantic analysis and feature extraction, semantic correlation mining, and cross-media ontology construction. Experimental results show that the proposed method is effective and can support cross-media retrieval applications in the field of electric power metering.
Application on Damage Types Recognition in Civil Aeroengine Based on SVM Optimized by DMPSO
ZHENG Bo, MA Xin
Computer Science. 2020, 47 (11A): 132-138.  doi:10.11896/jsjkx.200600101
Abstract PDF(2380KB) ( 706 )   
References | Related Articles | Metrics
In order to recognize the damage types of aeroengines automatically and reliably, and to enhance the capability of aeroengine maintenance support, a feature extraction method based on color moments and the gray level co-occurrence matrix (GLCM) is proposed to construct the feature database of aeroengine non-destructive detection images, and the support vector machine (SVM) is used as the intelligent classifier for damage recognition. A dual mutation particle swarm optimization (DMPSO) algorithm is designed to optimize the kernel parameter and penalty factor so as to guarantee the recognition performance of the SVM; the dual mutation strategy improves the global optimization capability, which is verified on several complex test functions. Finally, feature databases are constructed by different feature methods for four damage types of a certain aeroengine, and the SVM optimized by DMPSO is compared with the back propagation (BP) network, the extreme learning machine (ELM) network, and k-nearest neighbors (k-NN). The recognition results show that the proposed feature extraction method is more suitable for aeroengine damage recognition and helps to improve recognition accuracy. Meanwhile, the comparison of the four algorithms demonstrates that the optimized SVM consistently yields better and more stable recognition output. The comparison experiments prove that the methods proposed in this paper help to improve the recognition efficiency of aeroengine damage types.
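The color-moment half of the feature extraction above can be sketched directly: the first three moments (mean, standard deviation, skewness) per channel give a 3×C feature vector. This is a generic sketch of color moments only; the GLCM texture features and the DMPSO-tuned SVM are omitted.

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, std dev, skewness) per channel.

    image: H x W x C array; returns a 3*C feature vector, of the kind
    combined with GLCM texture features to build a damage-image database.
    """
    feats = []
    for c in range(image.shape[2]):
        ch = image[:, :, c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        # third root of the third central moment, keeping the sign
        skew = np.cbrt(((ch - mu) ** 3).mean())
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```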
Computer Graphics & Multimedia
Review of Human Action Recognition Technology Based on 3D Convolution
HUANG Hai-xin, WANG Rui-peng, LIU Xiao-yang
Computer Science. 2020, 47 (11A): 139-144.  doi:10.11896/jsjkx.200100094
Abstract PDF(3828KB) ( 1729 )   
References | Related Articles | Metrics
With the development of economy and society, video analysis tasks are receiving more and more attention, and human action recognition technology has been widely used in virtual reality, video surveillance, video retrieval, etc. Traditional human action recognition methods use 2D convolution to process the input video, but 2D convolution can only extract spatial features, and recognition based on manually extracted features is difficult in complex environments. Therefore, following the success of deep learning in image classification tasks, two-stream networks based on deep learning and 3D convolutions that can extract temporal and spatial features simultaneously have emerged. 3D convolution has developed rapidly in recent years and has produced a variety of classic architectures, each with different characteristics and its own optimizations for improving speed and accuracy. Based on a summary of several mainstream 3D convolutional frameworks and a comparative analysis on the corresponding data sets, the advantages and disadvantages of each framework are obtained, so as to find the optimal framework for a given practical situation.
Survey of Classification Methods of Breast Cancer Histopathological Images
MAN Rui, YANG Ping, JI Cheng-yu, XU Bo-wen
Computer Science. 2020, 47 (11A): 145-150.  doi:10.11896/jsjkx.191100098
Abstract PDF(3111KB) ( 2498 )   
References | Related Articles | Metrics
Histopathological examination is the “gold standard” for breast cancer diagnosis. The classification of breast cancer histopathological images has become a hot research topic in the field of medical image processing. Accurate classification of these images has great application value in assisting doctors to diagnose the disease and in meeting the needs of clinical application. This paper assesses the advantages and disadvantages of existing breast cancer histopathological image classification algorithms. The methods are divided into two categories, depending on whether features of the images must be extracted manually or the classification can be performed by a deep learning algorithm. Research on binary and multi-class classification of breast cancer histopathology images is further tracked, and classification algorithms using the latest deep learning theory are presented. Finally, conclusions of the classification studies are drawn, and possible future directions are discussed.
Survey of Image Inpainting Algorithms Based on Deep Learning
TANG Hao-feng, DONG Yuan-fang, ZHANG Yi-tong, SUN Juan-juan
Computer Science. 2020, 47 (11A): 151-164.  doi:10.11896/jsjkx.200600009
Abstract PDF(4588KB) ( 3920 )   
References | Related Articles | Metrics
Image inpainting is a research field of image processing that provides solutions for image recognition in the presence of object occlusion or the absence of critical parts of the image, and it attracts widespread attention across many fields. Images inpainted by deep learning methods have higher resolution and reliability, which makes deep learning one of the mainstream approaches to image inpainting. This paper introduces the basic principles and classical algorithms of the relevant deep learning methods, systematically and progressively dissects the representative image inpainting methods since 2010, explores the specific applications of deep-learning-based image inpainting in different fields, and lists several open research problems currently faced by this field.
Survey on Aircraft Detection in Optical Remote Sensing Images
ZHU Wen-tao, XIE Bao-rong, WANG Yan, SHEN Ji, ZHU Hao-wen
Computer Science. 2020, 47 (11A): 165-171.  doi:10.11896/jsjkx.190500176
Abstract PDF(1810KB) ( 2396 )   
References | Related Articles | Metrics
Aircraft detection in optical remote sensing images has been widely used in urban planning, aviation and military reconnaissance. Despite a large body of research, many problems remain to be solved. This paper reviews the research status of this technology. Starting from general considerations on remote sensing image target detection, we divide aircraft target detection methods into three categories, elaborate the concepts and research status of each, and conduct a comparative analysis on this basis. We focus on deep learning methods in this field and discuss the issues of samples and data sets. We then state the technical difficulties in aircraft target detection. Finally, we consider the object detection task for high-resolution remote sensing images and give an outlook on the future development of the field.
Survey of Monosyllable Recognition in Speech Recognition
ZHANG Jing, YANG Jian, SU Peng
Computer Science. 2020, 47 (11A): 172-174.  doi:10.11896/jsjkx.200200006
Abstract PDF(2336KB) ( 1288 )   
References | Related Articles | Metrics
Acoustic model modeling realizes the processing of speech signals and feature extraction; it is essential basic work in the process of speech recognition and an important factor affecting the overall performance of speech recognition. In speech recognition, selecting appropriate modeling primitives allows subsequent systems to obtain higher accuracy and stronger robustness. The syllable is the smallest pronunciation unit of Sino-Tibetan languages such as Chinese. Given its pronunciation characteristics, it is of great significance to study the use of the syllable as the modeling element for Sino-Tibetan speech recognition and to extract corresponding features for recognition. In view of the current research progress in monosyllable recognition, this paper first introduces the algorithm based on finite-state vector quantization and the research results of its improved variants in monosyllable recognition. Then the algorithm based on the hidden Markov model is introduced, together with syllable recognition results that combine hidden Markov models with other algorithms, followed by algorithms based on neural networks. Finally, important future directions of monosyllable recognition research are summarized and proposed.
Study on Reconstruction of Indoor 3D Scene Based on Binocular Vision
CHEN Ying, ZHAO Lai-wang, ZHAN Hong-chen, DING Yao
Computer Science. 2020, 47 (11A): 175-177.  doi:10.11896/jsjkx.200400096
Abstract PDF(2682KB) ( 1389 )   
References | Related Articles | Metrics
In early three-dimensional scene reconstruction, hardware constraints prevented good reconstruction of the scene. With hardware iteration, structured light and binocular vision systems are now used for more efficient three-dimensional scene reconstruction. The hardware platform is built with a ZED binocular camera and a Jabao AD-10 electric pan-tilt head, and the point cloud information of the scene is obtained through the binocular camera. Based on stereo matching with the SGBM (semi-global block matching) algorithm and RGB-D map generation, single-scene point cloud reconstruction is carried out; through ORB feature matching and ICP (iterative closest point) point cloud registration and fusion, panoramic 3D reconstruction of the indoor scene is realized. The experiments compare the performance of binocular stereo vision scene reconstruction on far/near targets, low-texture targets, and glass and other difficult materials. For the point cloud reconstruction of the 3D scene, this paper also proposes to optimize the point cloud information through sparsification, and compares the reconstruction effect of single acquisition and multiple acquisitions. The experiments show that, with a compromise on the number of acquisitions, the system balances the detail of scene reconstruction with the display effect, and it has reference and application value for the 3D reconstruction of different scene targets.
Study on Catenary Dropper and Support Detection Based on Intelligent Data Augmentation and Improved YOLOv3
LIU Shu-kang, TANG Peng, JIN Wei-dong
Computer Science. 2020, 47 (11A): 178-182.  doi:10.11896/jsjkx.200200053
Abstract PDF(3015KB) ( 996 )   
References | Related Articles | Metrics
The catenary is the transmission line over a railway that supplies power to electric locomotives; its supports and droppers are key components of railway power transmission, and a failure can have a huge impact, in serious cases causing catenary accidents and hidden dangers for high-speed trains. It is therefore of great significance to find an efficient and accurate positioning method for these two components to facilitate subsequent anomaly judgment. Focusing on this problem, this paper presents an intelligent data augmentation algorithm that randomly selects one or more augmentation methods to enhance each catenary picture. In addition, this paper proposes an improved YOLOv3 algorithm in which five groups of feature pyramids at different scales are designed by enhancing the feature extraction network. Finally, the improved algorithm is combined with the data augmentation algorithm to realize the dropper and support detection task. The mAP of the algorithm on the test dataset is 93.5%, and the detection speed is 45 FPS. The method thus realizes real-time detection of droppers and supports with high precision.
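The "randomly select one or more augmentation methods" idea can be sketched with numpy; the three augmentations here (horizontal flip, brightness scaling, Gaussian noise) are illustrative choices, not necessarily the paper's set.

```python
import random
import numpy as np

# illustrative augmentation pool (the paper's exact set is not specified here)
AUGMENTATIONS = {
    "hflip":      lambda img: img[:, ::-1],
    "brightness": lambda img: np.clip(img * 1.2, 0, 255),
    "noise":      lambda img: np.clip(img + np.random.normal(0, 5, img.shape), 0, 255),
}

def intelligent_augment(img, rng=random):
    """Apply a random non-empty subset of augmentations to a catenary image."""
    chosen = rng.sample(list(AUGMENTATIONS), rng.randint(1, len(AUGMENTATIONS)))
    for name in chosen:
        img = AUGMENTATIONS[name](img)
    return img
```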
Crowd Counting Model of Convolutional Neural Network Based on Multi-task Learning and Coarse to Fine
CHEN Xun-min, YE Shu-han, ZHAN Rui
Computer Science. 2020, 47 (11A): 183-187.  doi:10.11896/jsjkx.200300012
Abstract PDF(2779KB) ( 931 )   
References | Related Articles | Metrics
Crowd counting refers to counting the number of people in a single image or a single video frame. In order to address the insufficient accuracy of crowd counting, a crowd counting model based on multi-task learning and a coarse-to-fine convolutional neural network is proposed. Firstly, multi-task learning introduces auxiliary tasks related to the original task to guide the learning of the main task: crowd density estimation is the main task of the crowd counting model, and a crowd segmentation task is used as the auxiliary task to improve network performance. Secondly, the proposed model predicts the density map from coarse to fine: a rough, inaccurate crowd density map is generated first and then combined with the crowd segmentation map to obtain an accurate crowd density map. Experiments on Part A and Part B of the ShanghaiTech dataset and on the UCF_CC_50 dataset show that the proposed model outperforms the state-of-the-art CSRNet model by 4.55%, 14.15% and 19.09% respectively, and reduces the mean square error by 10.00%, 19.09% and 19.47% respectively. The proposed model significantly improves the accuracy and robustness of crowd counting.
Ghost Imaging Reconstruction Algorithm Based on Block Sparse Bayesian Model
WU Xue-lin, ZHU Rong, GUO Ying
Computer Science. 2020, 47 (11A): 188-191.  doi:10.11896/jsjkx.200200058
Abstract PDF(3097KB) ( 750 )   
References | Related Articles | Metrics
Conventional camera systems use light transmitted or backscattered from an object to form an image on film or on a focal-plane detector array. Ghost imaging systems instead exploit the spatial correlation between separated light fields to obtain images without recording the images themselves, and have great application potential in remote sensing, medical imaging and microscopy. A ghost imaging reconstruction algorithm based on a block sparse Bayesian model is proposed to address the difficulty of storing large-scale image reconstructions in traditional ghost imaging systems. The algorithm divides a large target image into several small image blocks of the same size; based on the Bayesian learning model, each image block undergoes compressed-sensing reconstruction, and the per-block results are then merged into the final reconstructed target image. Simulation results show that the block sparse Bayesian ghost imaging reconstruction algorithm improves the quality of the reconstructed image, and that large target images can be reconstructed effectively on a conventional computer configuration for practical implementations.
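The split-reconstruct-merge scheme that makes large images tractable can be sketched independently of the Bayesian solver; here `solve_block` is a stand-in for the per-block sparse Bayesian compressed-sensing reconstruction.

```python
import numpy as np

def reconstruct_blockwise(image, block, solve_block):
    """Split image into block x block tiles, reconstruct each independently,
    then merge the per-block results back into the full-size image.

    solve_block stands in for the per-block sparse Bayesian CS solver; each
    tile's sensing problem is small enough to fit in ordinary memory.
    """
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = solve_block(image[i:i+block, j:j+block])
    return out
```

With a perfect per-block solver the merge reproduces the original image exactly, which is the invariant the blocking scheme relies on.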
Visualization of DNA Sequences of Two Kinds of Bacteria Under Firmicutes
DU Liu-yun, ZHENG Zhi-jie, ZHENG Hua-xian
Computer Science. 2020, 47 (11A): 192-195.  doi:10.11896/jsjkx.191200070
Abstract PDF(3961KB) ( 771 )   
References | Related Articles | Metrics
In order to explore the relationships between biological layers, from complex bacterial communities to various species and genera, full-sequence DNA sequencing has developed vigorously, and the need for visualization of the scientific computing data of various gene sequences has become increasingly urgent. The complementary symmetry between the double-helix structure of a DNA sequence and its complex spatial structure is of great significance for exploring large numbers of long DNA sequences. Starting from the core idea that “sequence determines structure and structure determines function”, this paper uses a measurement model and method based on a variant value system, combining information technology and statistics, to analyze and compare the complete DNA sequences of two kinds of bacteria, Bacillus and Mycobacterium, showing the two-dimensional characteristic distributions of their DNA sequences. The visual form shows the similarities and differences between the two kinds of bacteria. Compared with traditional bacterial visualization methods, this method has low time complexity, good stability and strong intuitiveness, and is easy to understand. A series of distribution diagrams is provided under different measurement reference options.
Study on Online Education Focus Degree Based on Face Detection and Fuzzy Comprehensive Evaluation
ZHONG Ma-chi, ZHANG Jun-lang, LAN Yang-bo, HE Yue-hua
Computer Science. 2020, 47 (11A): 196-203.  doi:10.11896/jsjkx.191100203
Abstract PDF(3181KB) ( 1677 )   
References | Related Articles | Metrics
Aiming at the shortage of supervision means for students in online education, a fuzzy comprehensive evaluation algorithm based on face detection is proposed. From face images it detects the head's left/right turning angle, head raising/lowering angle, eye closure, mouth closure and facial expression. The head turning and raising angles are used to score head posture; the eye-closure and mouth-closure results are used to evaluate a fatigue score; and the facial expression detection results are used to evaluate an emotion score. The fuzzy comprehensive evaluation method then quantitatively judges the degree of learning concentration from the head posture, fatigue and emotion scores. The algorithm is applied to the evaluation of students' classroom concentration on an online education platform, helping instructors to acquire the classroom concentration of students in the online classroom in a timely fashion, and providing assistance for improving teaching plans and urging students to learn. A learning concentration detection system designed on the basis of this algorithm was used in simulated-scenario tests; it can effectively evaluate students' classroom concentration from the face detection results and improve classroom quality and students' learning outcomes.
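The fuzzy comprehensive evaluation step combines the three factor scores through a weight vector and a membership matrix; a weighted-average sketch (all weights and membership degrees below are hypothetical examples, not the paper's calibrated values):

```python
import numpy as np

def fuzzy_evaluate(weights, membership):
    """Weighted-average fuzzy comprehensive evaluation B = W . R.

    weights: importance of each factor (head posture, fatigue, emotion).
    membership: factors x grades matrix; each row gives a factor's membership
    degree in each concentration grade (e.g. high / medium / low).
    Returns the normalized membership of the student in each grade.
    """
    W = np.asarray(weights, dtype=float)
    R = np.asarray(membership, dtype=float)
    B = W @ R
    return B / B.sum()

# hypothetical example: 3 factors, 3 concentration grades (high, medium, low)
W = [0.40, 0.35, 0.25]
R = [[0.7, 0.2, 0.1],   # head posture memberships
     [0.5, 0.3, 0.2],   # fatigue memberships
     [0.6, 0.3, 0.1]]   # emotion memberships
```

The grade with the largest resulting membership is taken as the student's concentration level.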
Feature Selection Method for Behavior Recognition Based on Improved Feature Subset Discrimination
WANG Rui-jie, LI Jun-huai, WANG Kan, WANG Huai-jun, SHANG Xun-chao, TU Peng-jia
Computer Science. 2020, 47 (11A): 204-208.  doi:10.11896/jsjkx.200100030
Abstract PDF(2721KB) ( 781 )   
References | Related Articles | Metrics
Sensor-based human behavior recognition has been widely used in health monitoring, motion analysis and human-computer interaction. Feature selection is a critical step in identifying human behaviors accurately; it aims to improve classification performance by selecting classification-related features, so as to reduce feature dimensionality and computational complexity. The presence of feature redundancy, nevertheless, poses challenges to legacy feature selection methods. Therefore, to resolve the insufficiency that the Discernibility of Feature Subsets (DFS)-based feature selection method considers only feature relevance and not feature redundancy, a novel Redundancy and Discernibility of Feature Subsets (R-DFS)-based method is proposed that incorporates redundancy analysis into the feature selection process and removes redundant features, so as to improve classification accuracy and reduce computational complexity. Experimental results show that the improved method can efficiently reduce the feature dimensionality while improving classification accuracy.
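A generic sketch of redundancy-aware selection: features are walked in order of relevance, and any feature highly correlated with an already-kept one is dropped. This illustrates the redundancy analysis added on top of relevance ranking; it is not the paper's exact R-DFS criterion, and the relevance ordering is assumed precomputed.

```python
import numpy as np

def select_features(X, relevance_rank, redundancy_thresh=0.9):
    """Greedy redundancy filter: keep a feature only if its absolute Pearson
    correlation with every already-kept feature is below the threshold.

    X: samples x features matrix; relevance_rank: feature indices ordered
    from most to least discriminative (stand-in for the DFS score).
    """
    kept = []
    for f in relevance_rank:
        if all(abs(np.corrcoef(X[:, f], X[:, k])[0, 1]) <= redundancy_thresh
               for k in kept):
            kept.append(f)
    return kept
```

A duplicated feature is perfectly correlated with its original and is therefore removed, reducing the dimensionality without losing discriminative information.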
Neural Style Transfer Method Based on Laplace Operator to Suppress Artifacts
ZHANG Mei-yu, LIU Yue-hui, QIN Xu-jia, WU Liang-wu
Computer Science. 2020, 47 (11A): 209-214.  doi:10.11896/jsjkx.200100090
Abstract PDF(3303KB) ( 846 )   
References | Related Articles | Metrics
In image neural style transfer, most algorithms produce artifacts that affect visual quality: checkerboard effects and textures that corrupt the semantic content of the original image. This paper proposes an image style transfer method that suppresses artifacts with the Laplace operator. Firstly, the transformation network for real-time neural style transfer is redesigned using dilated convolutions and 1×1 convolution kernels. Then the transformed result is fed into VGG for feature-map detection; the multi-layer features and the original VGG features are filtered by the Laplace operator, and the L1 error between them is computed to constrain image changes and suppress artifacts. In the final encoder stage, the image content is modified using an encoder with added dropout. While deepening the network, the model size is controlled by the 1×1 convolution kernels, reducing it by about 6%. Experiments show that the method suppresses artifacts better than traditional methods and produces images with good visual quality.
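The Laplace-filtered L1 constraint can be sketched with the discrete 4-neighbor Laplacian kernel in numpy; this is a minimal stand-in for the feature-map filtering in the paper, not its training-time implementation.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_filter(img):
    """Discrete Laplacian of a 2D map (zero padding at the border)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * padded[dy:dy + img.shape[0],
                                              dx:dx + img.shape[1]]
    return out

def laplacian_l1_loss(feat_a, feat_b):
    """L1 error between Laplacian-filtered maps, the kind of term used to
    constrain image changes and suppress checkerboard artifacts."""
    return np.abs(laplacian_filter(feat_a) - laplacian_filter(feat_b)).mean()
```

Because the Laplacian responds to high-frequency structure, penalizing its L1 difference discourages checkerboard patterns while leaving smooth regions unconstrained.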
Recognition Algorithm of Welding Assembly Characteristics Based on Convolutional Neural Network
CHEN Jian-qiang, QIN Na
Computer Science. 2020, 47 (11A): 215-218.  doi:10.11896/jsjkx.200500067
Abstract PDF(2681KB) ( 854 )   
References | Related Articles | Metrics
In order to intellectualize and automate the welding and assembly of the high-speed body-in-white, and to solve the problems of small feature areas and background interference in the welding process, a fast recognition algorithm for welding assembly based on transfer learning and a convolutional neural network is proposed. Firstly, traditional image processing algorithms such as binarization are used to determine the rough position of the feature to be extracted; on this basis, Sobel filtering, erosion and Hough line detection are used to determine the precise position of the feature area. Secondly, considering the varying appearance of feature regions in different environments, a classification model based on a convolutional neural network is adopted to enhance the robustness and accuracy of the prediction model. Finally, a Visual Geometry Group network (VGG16) based on transfer learning is selected to address the problem that the number of samples is insufficient to train the parameters of the whole model. The experimental results show that the proposed recognition algorithm can accurately identify the state of the profile; its detection speed is better than that of YOLOv3, while its accuracy is slightly inferior. The algorithm meets the real-time requirements of the usage scenario.
Identification of Coal Vehicles Based on Convolutional Neural Network
MA Chuan-xiang, WANG Yang-jie, WANG Xu
Computer Science. 2020, 47 (11A): 219-223.  doi:10.11896/jsjkx.200100087
Abstract PDF(3378KB) ( 850 )   
References | Related Articles | Metrics
In order to prevent tax evasion caused by the non-invoicing of mineral resources such as coal, sand and gravel, automatically distinguishing empty from loaded vehicles with a deep convolutional neural network is an effective approach. Based on the AlexNet model, this paper proposes five improvements targeted at the differences between images of empty and loaded vehicles, arriving at a six-layer convolutional neural network based on maxout and dropout. Test results on 34 220 pictures of empty and loaded vehicles show that the model achieves good accuracy, sensitivity, specificity and precision. In addition, the model is highly robust and can successfully identify a large number of empty-vehicle images taken from different angles and in different scenes.
Object Tracking Algorithm Based on Feature Fusion and Adaptive Scale Kernel Correlation Filter
MA Kang, LOU Jing-tao, SU Zhi-yuan, LI Yong-le, ZHU Yuan
Computer Science. 2020, 47 (11A): 224-230.  doi:10.11896/jsjkx.200500084
Abstract PDF(3926KB) ( 873 )   
References | Related Articles | Metrics
In object tracking, important ways to improve the performance of a tracking algorithm are to improve the scale-adaptation strategy and to select features with strong discriminative ability. Kernel Correlation Filtering (KCF) cannot adapt to object scale variation and only uses the single Histogram of Oriented Gradients (HOG) feature, whose discriminative ability is insufficient. To solve these problems, a new scale-adaptation strategy is proposed by studying the correlation response values of the same object at different scales and finding their changing rule from the analysis of a large amount of statistical data; linear weighted fusion of HOG and Color Names (CN) features is also adopted to improve the algorithm's ability to discriminate the object. Experimental results on the OTB dataset show that the precision and success rate of the proposed algorithm are 8.5% and 28.9% higher than those of KCF overall, and 8.1% and 38.5% higher on video sequences with the scale-variation attribute; performance on other attribute sequences is also greatly improved. The tracking speed reaches 37.68 FPS, which meets real-time requirements.
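The linear weighted fusion of the two feature channels can be sketched at the response-map level: the HOG and CN correlation responses are blended, and the target is relocated at the peak of the fused map. The weight `w` here is a free parameter of the sketch, not the paper's tuned value.

```python
import numpy as np

def fuse_responses(resp_hog, resp_cn, w=0.5):
    """Linearly fuse two correlation-filter response maps and return the
    fused map plus the peak position used as the new target location."""
    fused = w * resp_hog + (1.0 - w) * resp_cn
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, peak
```

Setting w toward 1 trusts the HOG response, toward 0 the CN response; the fusion lets whichever channel discriminates the object better dominate the localization.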
Image Reconstruction Based on Ant Colony Algorithm
TIAN Xian-zhen, SUN Li-qiang, TIAN Zhen-zhong
Computer Science. 2020, 47 (11A): 231-235.  doi:10.11896/jsjkx.191000128
Abstract PDF(1892KB) ( 654 )   
References | Related Articles | Metrics
Rejoining large numbers of regular document-image fragments with the help of a computer can greatly improve work efficiency and reduce labor costs, and has therefore received increasing attention from the academic community. At present, there are three main problems in matching English fragments with regular shapes: the difficulty of fragment feature extraction, low splicing efficiency, and low splicing accuracy. For the first problem, this paper uses a series of data statistics to eliminate the interference of ascenders and descenders in English letters. For the second problem, given that the number of fragments of each type is the same, this paper establishes an optimization model and uses the ant colony algorithm for fast horizontal clustering. For the third problem, this paper defines a distance function between two fragments by counting the character-pixel gray values of 8-neighborhoods, and then uses the ant colony algorithm for matching and accurate clustering. Finally, problem B of the 2013 National Higher Education Cup mathematical modeling contest is taken as an example to verify the feasibility and effectiveness of the ant colony algorithm.
Streamline Adaptive Color Mapping Enhancement Algorithm for Vector Field Visualization
QIN Xu-jia, CHEN Guo-fu, SHAN Yang-yang, ZHENG Hong-bo, ZHANG Mei-yu
Computer Science. 2020, 47 (11A): 236-240.  doi:10.11896/jsjkx.191100019
Abstract PDF(3921KB) ( 890 )   
References | Related Articles | Metrics
Streamline visualization is a form of geometric visualization and an important method for presenting the direction information of a vector field intuitively. In order to display the field-strength information on the streamlines and better present vector field attributes, a streamline coloring method is proposed that adaptively selects the mapping mode according to the characteristics of the field-strength distribution, and the formula for the skewness coefficient of the field-strength distribution is derived. Firstly, a Sobol sequence is used to determine the positions of the streamline seed points, and the streamlines are generated from the seed points. Then the skewness coefficient of the field-strength data set is calculated and compared with threshold values, and the appropriate color mapping method is selected to color the streamlines. Experimental results show that using the Sobol sequence to place seed points makes the streamline distribution of the vector field more uniform, and that quantifying the data distribution as a skewness coefficient allows a more appropriate color mapping method to be selected, so that the color mapping better describes the actual field-strength distribution of the vector field.
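The skewness coefficient that drives the choice of mapping is the standard third standardized moment; a minimal sketch (the threshold and the two mapping modes are hypothetical placeholders for the paper's options):

```python
import numpy as np

def skewness(values):
    """Sample skewness g1 = m3 / m2^(3/2) of the field-strength data set."""
    v = np.asarray(values, dtype=float)
    d = v - v.mean()
    m2 = (d ** 2).mean()
    m3 = (d ** 3).mean()
    return m3 / m2 ** 1.5

def choose_mapping(values, threshold=1.0):
    """Pick a color-mapping mode from the skewness (threshold hypothetical):
    heavily skewed field strengths get a nonlinear mapping so that the
    crowded end of the distribution still spreads across the color range."""
    return "nonlinear" if abs(skewness(values)) > threshold else "linear"
```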
3D Registration for Multi-b-value Diffusion Weighted Images of Liver
ZHANG Wen-hua, LIU Xiao-ge, WANG Pei-pei, LIU Jing-jing, CHENG Jing-liang
Computer Science. 2020, 47 (11A): 241-243.  doi:10.11896/jsjkx.200400060
Abstract PDF(1602KB) ( 682 )   
Diffusion-weighted (DW) images acquired with different b values are misaligned because of the patient's respiratory movement during acquisition and because of image distortion at high b values. This paper proposes a new 3D registration algorithm to improve the overlap of 3D multi-b-value diffusion-weighted images. Fitting accuracy is obtained by fitting the DW images with the intra-voxel incoherent motion (IVIM) model, and a weight matrix is then constructed. Precise registration is performed by free-form deformation (FFD) weighted adaptively by this matrix. After registration, the overlap of 12 series of multi-b-value DW images is clearly improved, especially for higher b-value images, and the difference is significant. The proposed method registers well and is helpful for accurate quantitative analysis in clinical diagnosis.
Relevance Feedback Method Based on SVM in Shoeprint Images Retrieval
JIAO Yang, YANG Chuan-ying, SHI Bao
Computer Science. 2020, 47 (11A): 244-247.  doi:10.11896/jsjkx.200400032
Abstract PDF(2862KB) ( 686 )   
In criminal investigation, the retrieval of shoeprint images is of great significance for the detection of parallel cases. Accurately retrieving images of the same type as an on-site shoeprint from a large-scale shoeprint image library is a problem that still needs to be solved. On the basis of content-based image retrieval, a method combining a support vector machine (SVM) with manual feedback is proposed. The K-means algorithm clusters the feature vectors extracted by SIFT (scale-invariant feature transform) to construct a bag of shoeprint image features, and similarity ranking yields the preliminary retrieval results. The classifier trained from the feedback then measures image similarity by the distance between each image and the separating hyperplane and returns the secondary retrieval results. Experimental results show that, across different numbers of returned results, the recall rate of the secondary retrieval is 6% higher than that of the preliminary retrieval.
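The feedback loop above trains a classifier on the user's relevant/irrelevant marks and re-ranks the library by signed distance to the separating hyperplane. As a runnable stand-in for the SVM (an assumption; both score by w·x + b), a minimal perceptron is used here:

```python
# Sketch of SVM-style relevance feedback: train a linear classifier on
# user-marked samples, then re-rank the library by signed distance to the
# hyperplane (most "relevant-side" first).  A perceptron stands in for
# the paper's SVM; feature vectors here are toy 2-D examples.

def train_linear(samples, labels, epochs=50, lr=0.1):
    """Perceptron training; labels are +1 (relevant) / -1 (irrelevant)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def rerank(library, w, b):
    """Secondary search: sort images by distance to the hyperplane."""
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
    return sorted(library, key=score, reverse=True)
```

Given feedback that vectors near `[1, 0]` are relevant and vectors near `[0, 1]` are not, the re-ranking pushes `[1, 0]`-like library entries to the top.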
Ship Target Detection in Remote Sensing Image Based on S-HOG
DING Rong-li, LI Jie, ZHANG Man, LIU Yan-li, WU Wei
Computer Science. 2020, 47 (11A): 248-252.  doi:10.11896/jsjkx.191200090
Abstract PDF(3222KB) ( 1104 )   
With the continuous development of high-resolution satellite remote sensing imaging technology, ship target detection in visible-light remote sensing images has become a hot topic. It is of great strategic significance in military fields such as warship detection and precise guidance, and in civilian fields such as sea search and rescue and fishing vessel monitoring. Ship detection in remote sensing images is easily disturbed by clouds, waves, islands and other factors, which leads to a high false alarm rate. To address this, a ship identification algorithm based on the ship histogram of oriented gradients (S-HOG) feature is proposed. Firstly, candidate target regions are extracted by abnormal point detection to obtain suspicious target slices; then the S-HOG feature is computed to eliminate false alarms, so that real ship targets are extracted effectively. Experimental results show that the algorithm significantly reduces the false alarm rate while maintaining a high detection rate, and that it has strong anti-interference ability and high robustness.
Multi-level Ship Target Discrimination Method Based on Entropy and Residual Neural Network
LIU Jun-qi, LI Zhi, ZHANG Xue-yang
Computer Science. 2020, 47 (11A): 253-257.  doi:10.11896/jsjkx.191100006
Abstract PDF(3255KB) ( 753 )   
In order to remove false alarms from candidate ship target regions, a multi-level false alarm discrimination method based on entropy and a residual neural network is proposed. Firstly, based on the difference in entropy between image slices of ships and of false alarms, most false alarms in the candidate regions are removed by thresholding the entropy. Then, to confirm the ship targets, a deep residual neural network for image slice classification is designed and trained with the transfer learning method of fine-tuning, realizing automatic classification into ships and false alarms. Experimental results show that the proposed method discriminates well and effectively eliminates false alarms such as islands, clouds and sea clutter. It is simple and efficient, and no complicated identification work is needed in the subsequent process.
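The first-level entropy filter above can be sketched as follows; the threshold value is an illustrative assumption (uniform open-sea slices have near-zero gray-level entropy, textured ship slices score higher):

```python
import math

# Sketch of entropy-based false-alarm removal: compute the Shannon
# entropy of a slice's gray-level histogram and discard slices below a
# threshold as false alarms.  Slices are flat lists of 0..255 gray values.

def gray_entropy(pixels, levels=256):
    """Shannon entropy (bits) of the gray-level histogram of a slice."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def first_stage_filter(slices, threshold=2.0):
    """Keep only slices whose entropy reaches the threshold."""
    return [s for s in slices if gray_entropy(s) >= threshold]
```

A constant slice (entropy 0) is dropped, while a slice with 64 distinct gray levels (entropy 6 bits) survives to the residual-network stage.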
MACTEN:Novel Large Scale Cloth Texture Classification Architecture
LI Hao-xiang, LI Hao-jun
Computer Science. 2020, 47 (11A): 258-265.  doi:10.11896/jsjkx.191200115
Abstract PDF(4142KB) ( 858 )   
The variety of fabrics and the complexity of their textures make them very challenging to distinguish manually. Drawing on deep learning, a residual texture encoding network that merges multi-scale attention co-occurrence representations (MACTEN) is proposed, and a large-scale web-based fabric classification system is built on it. MACTEN is mainly composed of an attention co-occurrence representation module (ACM), an improved residual encoding module (REM), and a multi-scale texture encoding fusion module (MTEM). An attention mechanism is introduced into ACM to deal with different types of cloth: it adaptively adjusts the weights of texture co-occurrence features and optimizes their joint distribution by expanding the co-occurrence domain, forming more refined texture co-occurrence features. Moreover, by introducing a dictionary-learning method into REM, an improved residual encoding that captures spatially invariant global texture information is obtained, which effectively addresses the disordered representation of cloth texture. Finally, MTEM combines multi-scale attention texture co-occurrence features with cascaded residual texture encodings into descriptors that can represent disordered fabric textures of different shapes and sizes. On a self-built cloth dataset, MACTEN outperforms the baseline algorithms, and experimental results on the KTH-TIPS, FMD and DTD datasets show that MACTEN generalizes as a general texture classification algorithm.
Study on 3D Color Slicing Technology Based on OBJ Model
XING Jing-pu, LI Feng-qi, WANG Sheng-fa, WANG Yi, ZONG Gui-sheng, FAN Yong-gang
Computer Science. 2020, 47 (11A): 266-270.  doi:10.11896/jsjkx.200200029
Abstract PDF(1900KB) ( 1089 )   
In recent years, with the continuous progress of 3D printing technology, color 3D printing is becoming a general demand of the industry. However, the STL file, the standard format for model description in the 3D printing field, does not retain the color information of a 3D model and cannot meet the new requirements that color 3D printing places on model information extraction and slice processing. In this context, the OBJ color model is selected as the research object for color slicing technology. Its file structure is analyzed, and the geometry and color information stored in it are extracted and optimized for slicing. Combining the specific characteristics of the OBJ model, a color slicing algorithm based on model continuity is proposed on the basis of the traditional topological slicing algorithm, and the whole flow of the algorithm is given. The algorithm improves the efficiency of layered processing, obtains the layered contour information of the model, and completes color slicing. Experimental results prove that the technology can slice OBJ color models with good results, stability and reliability.
Marker-constrained Interactive Segmentation of 3D Animated Meshes
ZHENG Lei, WU Jun-wei, LIN Jun-mian, PAN Xiang
Computer Science. 2020, 47 (11A): 271-275.  doi:10.11896/jsjkx.200400030
Abstract PDF(3489KB) ( 666 )   
Existing interactive approaches only work on single 3D meshes. In view of this, this paper proposes an interactive algorithm for segmenting 3D animated meshes based on 3D data correspondence. Firstly, users mark points on any one 3D mesh for interactive segmentation. Then, the algorithm maps the users' marks to the other meshes by geodesic distance and isometric mapping. Finally, it segments the other frames interactively using the transferred markers and iso-lines. Experimental results show that the algorithm can effectively segment different kinds of 3D animations; it improves segmentation quality and compares favorably with existing algorithms.
Application and Research of Image Semantic Segmentation Based on Edge Computing
WANG Sai-nan, ZHENG Xiong-feng
Computer Science. 2020, 47 (11A): 276-280.  doi:10.11896/jsjkx.200900046
Abstract PDF(3245KB) ( 1542 )   
With the extensive application of deep learning in medical fields such as medical image segmentation and drug detection, semantic segmentation technology plays a pivotal role. Semantic segmentation combines target detection and image recognition: it aims to segment an image into multiple groups of regions with specific semantics, a dense classification problem at the pixel level. However, traditional deep learning models cannot meet the power consumption, memory management and real-time requirements of mobile devices, which hinders the development of mobile visual recognition. Edge computing is a new architectural mode that extends computing, network, storage and bandwidth capabilities from the host to the mobile edge, enabling model inference in environments with limited computing resources. Therefore, this paper converts, deploys and runs inference for classic image semantic segmentation models such as FCN, SegNet and U-Net on a development board with an edge TPU coprocessor, and verifies the correctness and performance of the deployed models on a collected real drug dataset.
Video Fusion Method Based on 3D Scene
NING Ze-xi, QIN Xu-jia, CHEN Jia-zhou
Computer Science. 2020, 47 (11A): 281-285.  doi:10.11896/jsjkx.200400049
Abstract PDF(2762KB) ( 2781 )   
With the continuous progress of science and technology and of social development, urban security monitoring systems are constantly improving, and how to make full use of the massive amount of monitoring video data has become a hot issue. This paper proposes a method of multi-channel video fusion based on 3D scenes. In this method, multiple video streams from a specified region of a 3D scene are fused into a complete image and projected into that region, enhancing the realism and usefulness of the virtual 3D scene. At the same time, to address the occlusion-penetration problem in projective texture mapping caused by missing depth information, this paper proposes a texture mapping algorithm that establishes a one-to-one correspondence between model vertices and texture coordinates, so that video frames are projected as textures onto the 3D scene models. Experimental results show that the proposed method effectively fuses video with 3D scene models.
Computer Network
5G Network-oriented Mobile Edge Computation Offloading Strategy
TIAN Xian-zhong, YAO Chao, ZHAO Chen, DING Jun
Computer Science. 2020, 47 (11A): 286-290.  doi:10.11896/jsjkx.200200028
Abstract PDF(2069KB) ( 876 )   
Mobile edge computing (MEC) is one of the important research directions in current wireless sensor networks. MEC allows wireless sensor devices to offload local computing tasks to an edge cloud server, thereby greatly improving the computing capacity of wireless sensor networks. However, when a large number of devices offload computation at the same time, signal interference and excessive computational load on the edge cloud server result. First, in order to improve the computation quality of wireless networks, a time allocation and computation offloading strategy that minimizes the computing period of an MEC system with multiple wireless sensor devices is proposed; 5G non-orthogonal multiple access and successive interference cancellation allow multiple wireless devices to offload computation simultaneously on the same subcarrier, thereby improving offloading efficiency. Then, models of wireless device energy harvesting and task computing are established and combined with the above strategy into an optimization problem, which is then solved. Finally, the effectiveness of the proposed strategy is verified by numerical experiments.
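The core offload-or-not trade-off behind such strategies can be sketched as a completion-time comparison. This is a deliberately simplified model (all rates and CPU speeds are illustrative assumptions; the paper's formulation additionally covers NOMA subcarrier sharing and energy harvesting, which are omitted):

```python
# Sketch: a task either runs locally, or is transmitted uplink to the
# edge server and computed there.  Choose whichever finishes sooner.

def completion_time(task_bits, cycles_per_bit, local_hz, uplink_bps,
                    edge_hz, offload):
    cycles = task_bits * cycles_per_bit
    if offload:
        # transmission time + edge computation time
        return task_bits / uplink_bps + cycles / edge_hz
    return cycles / local_hz          # local computation time

def best_choice(task_bits, cycles_per_bit, local_hz, uplink_bps, edge_hz):
    t_local = completion_time(task_bits, cycles_per_bit, local_hz,
                              uplink_bps, edge_hz, offload=False)
    t_edge = completion_time(task_bits, cycles_per_bit, local_hz,
                             uplink_bps, edge_hz, offload=True)
    return ("offload", t_edge) if t_edge < t_local else ("local", t_local)
```

For a 1 Mbit task at 1000 cycles/bit, a 100 MHz device, a 10 Mbit/s uplink and a 10 GHz edge server, offloading finishes in 0.2 s versus 10 s locally.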
LRBG-based Approach for IP Geolocation
ZHAO Qian, CHEN Shu-hui
Computer Science. 2020, 47 (11A): 291-295.  doi:10.11896/jsjkx.200300078
Abstract PDF(2854KB) ( 1038 )   
IP geolocation determines the geographic location of network devices from their IP addresses, the identifiers of Internet devices. Landmarks are a key factor in IP geolocation. Prior methods use home PCs, web servers and common routers as landmarks, but they produce erroneous results because of changeable IP addresses, inconsistent landmark density, and the complicated geometric relationship between time delay and distance. The traceroute command can find all the routers between a probe and the target host. This paper proposes a new method named Last-hop Router Based Geolocation (LRBG), which uses the last-hop router on a traceroute path as the landmark. The problem is solved in two steps. The first step infers the location of a last-hop router from the fixed Internet users within its delivery range. The second step identifies the geographic location of the target host from its relation to the last-hop router. Experimental results show that the LRBG method achieves street-level IP geolocation with an average accuracy of 3.17 km.
Energy-balanced Multi-hop Cluster Routing Protocol Based on Energy Harvesting
LI Zheng-yang, TAO Yang, ZHOU Yuan-lin, YANG Liu
Computer Science. 2020, 47 (11A): 296-302.  doi:10.11896/jsjkx.200300002
Abstract PDF(2248KB) ( 705 )   
Existing clustering routing protocols for energy-harvesting wireless sensor networks focus on cluster head selection and cluster construction, with less research on inter-cluster routing. Inter-cluster routing mostly minimizes the hop count or the inter-cluster transmission energy, without comprehensively considering data transmission energy consumption, node distribution, node energy status and energy harvesting. Such protocols cannot effectively balance node energy consumption, and the part of the network near the base station is prone to energy holes. Aiming at these problems, an energy-balanced multi-hop clustering routing protocol based on solar energy harvesting is proposed. The protocol sets the number of clusters in each unit through a reasonable area division, achieving non-uniform clustering and balancing the energy consumption of cluster head nodes across units. In the cluster head selection stage, each node calculates its cluster-head weight from its own energy and the distribution of its neighbors, and cluster heads are selected in turn, effectively balancing the energy consumption of nodes within a cluster. Finally, the protocol uses a routing strategy based on the PSO algorithm, which improves the energy efficiency of inter-cluster data transmission and balances the energy consumption of nodes along the transmission path. Simulation analysis shows that the protocol balances node energy consumption markedly better than other protocols, maintains the stable period of the network for a long time, and achieves higher network throughput.
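The cluster-head weight in the selection stage can be sketched as below. The weighting formula and coefficients are illustrative assumptions, not the paper's exact expression; the intent is only to show a weight that grows with residual energy and harvesting rate and shrinks with neighborhood density:

```python
# Sketch: each node computes a cluster-head weight from its residual
# energy, its energy-harvesting rate and its neighbor count; the node
# with the highest weight in a unit becomes cluster head.

def cluster_head_weight(residual_energy, harvest_rate, neighbor_count,
                        max_energy, max_harvest, a=0.5, b=0.3, c=0.2):
    energy_term = residual_energy / max_energy      # normalised energy
    harvest_term = harvest_rate / max_harvest       # normalised harvesting
    density_term = 1.0 / (1 + neighbor_count)       # denser -> lower weight
    return a * energy_term + b * harvest_term + c * density_term

def elect_head(nodes, **limits):
    """nodes: dict name -> (residual_energy, harvest_rate, neighbor_count)."""
    return max(nodes, key=lambda n: cluster_head_weight(*nodes[n], **limits))
```

With two candidates of equal density, the one with more residual energy and a better harvesting rate is elected.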
Fuzz Testing of Android Inter-component Communication
ZHAO Sai, LIU Hao, WANG Yu-feng, SU Hang, YAN Ji-wei
Computer Science. 2020, 47 (11A): 303-309.  doi:10.11896/jsjkx.200100122
Abstract PDF(2430KB) ( 915 )   
The Android operating system provides a rich inter-application messaging mechanism, in which intent-based communication is an important form of inter-component communication. This mechanism facilitates the collaboration of applications and reduces developers' burden by increasing component reuse. However, the message-passing mechanism can be abused: an application may send erroneous messages to a target application and crash it. Aiming at this problem, a robustness detection method based on fuzz testing is proposed and an intent fuzzing tool, ICCDroidFuzzer, is implemented. The method uses static analysis to obtain component-related information, constructs test suites from it and sends them to the target components, while monitoring the Android system logs for runtime crashes. 420 real business applications were examined with ICCDroidFuzzer, and 19 exceptions that cause applications to crash were found. The tool tests the robustness of applications automatically and is suitable for testing large numbers of Android applications without human intervention.
Multi-hop Dynamic Resource Allocation Protocol with Guaranteed QoS
ZHANG Hua-wei, XIE Dong-feng, ZOU Yan-fang, HU Yong-hui
Computer Science. 2020, 47 (11A): 310-315.  doi:10.11896/jsjkx.200400068
Abstract PDF(2209KB) ( 719 )   
According to the characteristics of ad hoc networks, which have no center, changeable topology, multi-hop sharing of channel resources and diverse services, a multi-hop dynamic resource allocation protocol with guaranteed QoS is proposed. The designed frame structure consists of three parts: bootstrap timeslots, broadcast/standby timeslots and contention timeslots. The structure achieves fair multi-node access and meets the delay requirements of real-time services through the following methods: a three-hop conflict-prevention method reuses channel resources, and the noise threshold at the receiving node is reduced to minimize the possibility of conflicts; idle broadcast or reserved slots are preempted according to QoS requirements and a preemption criterion composed of operation priorities, the probability that a slot is free, and the probability that it remains free; and the convergence processes of the bootstrap, broadcast/standby and contention timeslots are described respectively. MATLAB simulation results show that the proposed resource allocation method improves the network packet delivery fraction and reduces the average delay, and that it is especially suitable for networks with heavy loads and large numbers of nodes.
Real-time Network Traffic Prediction Model Based on EMD and Clustering
YAO Li-shuang, LIU Dan, PEI Zuo-fei, WANG Yun-feng
Computer Science. 2020, 47 (11A): 316-320.  doi:10.11896/jsjkx.200100085
Abstract PDF(2995KB) ( 847 )   
Because complex network traffic has multiple characteristics, a single traditional model predicts it poorly. In order to improve the accuracy and real-time performance of traffic prediction, a network traffic prediction model based on EMD and clustering is proposed. First, the network traffic is decomposed by EMD into IMFs that lie on different time scales and have relatively simple frequency content. Secondly, the IMFs are clustered by an improved K-means algorithm, gathering IMFs of similar complexity. The clustered IMFs are then predicted with the ARMA model, and finally the predicted values of the IMFs are summed to obtain the prediction of the overall network traffic. Experimental results show that, compared with the EMD-ARMA model, the proposed model not only reduces training time but also reduces MSE and MAE by 3.8% and 7.6% respectively and improves APT by 6 percentage points. The model achieves higher prediction accuracy and can be used for real-time traffic prediction.
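The "predict each component, then sum" step above can be sketched as follows. The EMD decomposition itself is assumed to be already done, and a simple least-squares AR(1) predictor stands in for the paper's ARMA models (an assumption made to keep the sketch self-contained):

```python
# Sketch: forecast each decomposed component with AR(1), then sum the
# per-component forecasts to get the overall traffic forecast.

def ar1_forecast(series):
    """Least-squares AR(1) fit x_t ~ phi * x_{t-1}; one-step forecast."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    phi = num / den if den else 0.0
    return phi * series[-1]

def forecast_traffic(imfs):
    """imfs: component series whose pointwise sum is the raw traffic."""
    return sum(ar1_forecast(component) for component in imfs)
```

For components `[1, 1, 1, 1]` (phi = 1, forecast 1) and `[2, 4, 8, 16]` (phi = 2, forecast 32), the summed traffic forecast is 33.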
Dynamic Adaptive Multi-radar Tracks Weighted Fusion Method
ZHANG Liang-cheng, WANG Yun-feng
Computer Science. 2020, 47 (11A): 321-326.  doi:10.11896/jsjkx.2004000145
Abstract PDF(3370KB) ( 1338 )   
In order to form a more accurate fused track from multi-source radar track data, the classical dynamic weighting method of multi-source information fusion and Kalman filtering are studied, and a dynamic adaptive weighted fusion method for multi-source radar information is designed. To overcome the disadvantage of static weighted fusion when radar detection accuracy and the detection environment are unknown, a quality factor containing four sub-weights that reflect the quality characteristics of each data source is set up, and the quality of radar track reports is analyzed in real time. Multi-source fusion is completed dynamically according to the quality factor, yielding a fused track of better accuracy. Practical testing and simulation prove that the method is effective and stable.
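The dynamic weighting step can be sketched as below. For brevity the four quality sub-weights named in the abstract are collapsed into a single per-radar quality score (an assumption); the fused point is the quality-weighted average of the reports:

```python
# Sketch: normalise per-radar quality factors into weights and form the
# fused track point as the weighted sum of the radar reports.

def fuse_reports(reports, qualities):
    """reports: list of (x, y) track points; qualities: positive scores."""
    total = sum(qualities)
    weights = [q / total for q in qualities]
    x = sum(w * r[0] for w, r in zip(weights, reports))
    y = sum(w * r[1] for w, r in zip(weights, reports))
    return (x, y)
```

A report from a radar with quality 3 pulls the fused point three times as hard as one with quality 1; recomputing the qualities every scan makes the weighting dynamic.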
New Method of Traffic Flow Forecasting of Connected Vehicles Based on Quantum Particle Swarm Optimization Strategy
ZHANG De-gan, YANG Peng, ZHANG Jie, GAO Jin-xin, ZHANG Ting
Computer Science. 2020, 47 (11A): 327-333.  doi:10.11896/jsjkx.191200126
Abstract PDF(3495KB) ( 838 )   
This paper proposes a traffic flow prediction algorithm for connected vehicles based on a quantum particle swarm optimization strategy. A model is established according to the characteristics of the traffic flow data; a genetic simulated annealing algorithm is applied within the quantum particle swarm algorithm to obtain optimized initial cluster centers; and the optimized algorithm is applied to tune the parameters of a radial basis function neural network prediction model, whose high-dimensional mapping yields the predicted data. In addition, to compare performance, the algorithm is studied against related algorithms such as QPSO-RBF. Simulation results show that, compared with the other algorithms, the proposed algorithm reduces prediction errors and obtains better and more stable prediction results.
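A compact quantum-behaved PSO (QPSO), the optimization strategy named above, is sketched here minimizing a 2-D sphere function rather than tuning RBF-network parameters (the paper's use). The genetic simulated annealing initialization is omitted; this is the plain QPSO update with a linearly shrinking contraction coefficient:

```python
import math
import random

# Sketch of quantum-behaved PSO: particles are drawn toward an attractor
# between their personal best and the global best, with a step scaled by
# the distance to the mean best position (mbest).

def qpso(f, dim, bounds, n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for it in range(iters):
        alpha = 1.0 - 0.5 * it / iters      # contraction coefficient
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi, u = rng.random(), rng.random()
                attract = phi * pbest[i][d] + (1 - phi) * gbest[d]
                step = alpha * abs(mbest[d] - x[d]) * math.log(1 / u)
                x[d] = attract + step if rng.random() < 0.5 else attract - step
                x[d] = min(max(x[d], lo), hi)
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest, f(gbest)

sphere = lambda v: sum(c * c for c in v)
```

Running `qpso(sphere, 2, (-10, 10))` drives the swarm toward the origin; in the full algorithm `f` would instead score RBF-network prediction error on traffic data.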
Indoor Positioning Method Based on UWB Odometer and RGB-D Fusion
WANG Wen-bo, HUANG Pu, YANG Zhang-jing
Computer Science. 2020, 47 (11A): 334-338.  doi:10.11896/jsjkx.200200033
Abstract PDF(2307KB) ( 1095 )   
Aiming at the tracking failures caused by rapid movement in single RGB-D camera SLAM, an indoor localization method that fuses UWB, odometry and RGB-D data is proposed. Based on UWB localization, the method uses the odometer to reduce the inherent drift error of UWB. Using a weighted average, the sensors are fused with only a small amount of computing resources while the accuracy of the system improves. Experimental results show that the method keeps the localization error within 10 cm and the deflection angle error within 1°, and that it solves the tracking failure problem of single RGB-D camera SLAM.
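The weighted-average fusion idea above can be sketched in a few lines. The fixed weight value is an illustrative assumption (the paper weights by sensor confidence); the point is that blending an absolute-but-noisy UWB fix with a smooth-but-drifting odometry estimate is a single cheap arithmetic step:

```python
# Sketch: blend a UWB position fix with an odometry estimate by a
# weighted average, per axis.

def fuse_position(uwb_xy, odom_xy, w_uwb=0.6):
    """Return the fused (x, y) position; w_uwb is the UWB weight."""
    w_odom = 1.0 - w_uwb
    return (w_uwb * uwb_xy[0] + w_odom * odom_xy[0],
            w_uwb * uwb_xy[1] + w_odom * odom_xy[1])
```

Running this at every UWB update keeps the absolute reference of UWB while the odometry term smooths its jitter.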
Study on Optimization of Heterogeneous Data Fusion Model in Wireless Sensor Network
HUANG Ting-ting, FENG Feng
Computer Science. 2020, 47 (11A): 339-344.  doi:10.11896/jsjkx.200100109
Abstract PDF(2578KB) ( 861 )   
Aiming at the energy consumption and security problems of wireless sensor networks, this paper proposes a data fusion model for wireless sensor networks. The model introduces information entropy into a new way of calculating trust degree, establishes a trust mechanism that monitors and filters abnormal data, and improves the security and reliability of the network through this mechanism. To address poor estimation and filtering divergence in strongly nonlinear systems, the unscented Kalman filter is superimposed, and an attenuation factor is introduced into the observation noise covariance matrix when the unscented Kalman filter is first applied. Simulation results show that the proposed algorithm improves the accuracy of the filtering results compared with the traditional algorithm.
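The attenuation-factor idea can be sketched in the scalar case. A linear Kalman filter stands in for the paper's unscented variant (an assumption), and the attenuation factor progressively scales down the observation noise covariance R, shifting trust toward the measurements:

```python
# Scalar sketch: a Kalman update (random-walk process model) in which
# the observation-noise covariance R is multiplied by an attenuation
# factor at each step.

def kalman_attenuated(measurements, q=1e-3, r=0.5, attenuation=0.9):
    x, p = measurements[0], 1.0      # state estimate and its covariance
    estimates = [x]
    r_eff = r
    for z in measurements[1:]:
        p += q                       # predict step
        r_eff *= attenuation         # attenuate observation covariance
        k = p / (p + r_eff)          # Kalman gain
        x += k * (z - x)             # update with measurement z
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

After a step change in the signal, the attenuated R makes the gain grow, so the estimate closes the gap to the new level faster than a fixed-R filter would.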
Architecture Strategy of D2D Content Edge Cache Based on Particle Swarm Optimization
MENG Li-min, WANG Kun, ZHENG Zeng-qian, JIANG Wei
Computer Science. 2020, 47 (11A): 345-348.  doi:10.11896/jsjkx.200500079
Abstract PDF(2007KB) ( 713 )   
In the extreme case that the communication network infrastructure is paralyzed, how to keep the network of rescue terminal devices interconnected and under efficient centralized control, and how to gather and share all kinds of information at the command site, are key problems. In order to keep information flowing and reduce the probability of interruption, this paper studies an edge-cache-assisted device-to-device (D2D) communication overlay network. A D2D communication edge cache architecture is established, the cache index is optimized, and a virtual logical mapping channel of the D2D overlay network is constructed by detecting the cache architecture of the terminal nodes. On this basis, a D2D content edge cache architecture strategy for individual-soldier terminal equipment under emergency conditions is proposed, based on a binary particle swarm algorithm with adaptive inertia weight. Experimental results show that the algorithm's edge cache preset strategy achieves a higher cache index and supports better information transmission.
Joint Sparse Channel Estimation and Data Detection Based on Bayesian Learning in OFDM System
CHEN Ping, GUO Qiu-ge, LI Pan, CUI Feng
Computer Science. 2020, 47 (11A): 349-353.  doi:10.11896/jsjkx.191100090
Abstract PDF(1928KB) ( 910 )   
It is well known that the impulse response of a wideband wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this paper, two sparse channel estimation algorithms based on sparse Bayesian learning (SBL) are proposed for orthogonal frequency division multiplexing (OFDM) systems, called the SBL algorithm and the J-SBL algorithm. Even when the channel measurement matrix is unknown, the proposed algorithms can still estimate the channel taps effectively. Monte Carlo simulations show that, compared with the classical orthogonal matching pursuit (OMP) and variational message passing (VMP) algorithms, the proposed algorithms perform better at the same mean square error and bit error rate, improving SNR by 3~5 dB.
Information Security
Research on Application of Blockchain Technology in Field of Spatial Information Intelligent Perception
GUO Chong-ling, ZHAO Ye
Computer Science. 2020, 47 (11A): 354-358.  doi:10.11896/jsjkx.200400044
Abstract PDF(1789KB) ( 1951 )   
This paper reviews the application of blockchain technology in the field of spatial information intelligent perception. Based on the technical characteristics of blockchain smart contracts and practical experience with blockchain in the Internet of Things, blockchain is analyzed from three aspects: improving the ability to acquire spatial information data, increasing the reliability of spatial information data extraction, and expanding the applications of spatial information data. Directions and technical approaches for integrating blockchain technology with spatial information perception are proposed, together with future strategic directions for intelligent remote sensing, to raise the digital, dynamic and real-time level of spatial information acquisition and application.
Review of Clock Glitch Injection Attack Technology
YANG Peng, OU Qing-yu, FU Wei
Computer Science. 2020, 47 (11A): 359-362.  doi:10.11896/jsjkx.200100096
Abstract PDF(1790KB) ( 1784 )   
Clock glitch injection is an effective and commonly used fault injection method in real environments. It introduces a short glitch clock within the normal clock cycle, so that one or more flip-flops latch an erroneous state, modifying instructions or corrupting data or state, until the secret information in the chip leaks through the erroneous operation. This paper analyzes the causes of clock faults and describes the main glitch injection mechanisms, including clock switching at the same frequency, clock switching at different frequencies, and fuzzy clock injection. Finally, the latest practical applications and future development directions of the three clock glitch injection attacks are introduced.
Webshell File Detection Method Based on TF-IDF
ZHAO Rui-jie, SHI Yong, ZHANG Han, LONG Jun, XUE Zhi
Computer Science. 2020, 47 (11A): 363-367.  doi:10.11896/jsjkx.200100064
Abstract PDF(4316KB) ( 1069 )   
With the rapid development of the Internet, cyber attacks are becoming more frequent. Webshells are a common attack method, and traditional detection methods cannot cope with complex and flexible Webshell variants. To solve this problem, a Webshell detection method based on TF-IDF is proposed. First, the system classifies Webshell files and transcodes the different file types accordingly to reduce the impact of obfuscation and interference techniques on detection; then it builds a bag-of-words model and uses the TF-IDF algorithm to extract weighted features; finally, it trains the detection model with the XGBoost algorithm. Compared with traditional machine learning algorithms, the Webshell detection model based on TF-IDF and XGBoost has higher accuracy than traditional detection methods, as well as stronger robustness and generalization. The detection accuracy reaches 98.09% for PHP files and 97.09% for JSP files.
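The TF-IDF weighting step described above can be sketched in pure Python. Tokens frequent in one file but rare across the corpus get high weight, which is what lets suspicious Webshell keywords stand out; the XGBoost training stage is not shown:

```python
import math

# Sketch: compute TF-IDF weights for a corpus of tokenised files.
# tf = token frequency within the file; idf = log(N / document frequency).

def tf_idf(corpus):
    """corpus: list of token lists.  Returns one {token: weight} per doc."""
    n = len(corpus)
    df = {}
    for doc in corpus:
        for tok in set(doc):
            df[tok] = df.get(tok, 0) + 1
    out = []
    for doc in corpus:
        weights = {}
        for tok in doc:
            tf = doc.count(tok) / len(doc)
            idf = math.log(n / df[tok])
            weights[tok] = tf * idf
        out.append(weights)
    return out
```

In a toy corpus where `"echo"` appears in every file but `"eval"` only in one, `"echo"` gets zero weight while `"eval"` gets a positive weight, so the rare (and here suspicious) token dominates the feature vector.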
Design and Analysis of Trapdoor S-Box Based on Linear Partition
HAN Yu, ZHANG Wen-zheng, DONG Xin-feng
Computer Science. 2020, 47 (11A): 368-372.  doi:10.11896/jsjkx.191200036
Abstract PDF(1975KB) ( 1005 )   
A block cipher with a trapdoor is a kind of cipher algorithm that can meet special needs in specific scenarios. Trapdoor functions are widely used in asymmetric encryption, and the idea of the trapdoor function is here carried over to block ciphers. The S-box is the core of a block cipher: it is the only nonlinear component in most block cipher algorithms and provides confusion in the encryption process. Therefore, when constructing a trapdoor block cipher, the main task is to implant the trapdoor into the S-box. Aiming at this problem, this paper studies a method of constructing trapdoor S-boxes based on the algebraic properties of linear partitions of finite fields into cosets, where the trapdoor information is the linear partition itself. The paper first introduces the principle of trapdoor algorithms and trapdoor S-boxes based on linear partitions, then constructs an 8×8 trapdoor S-box defined over such a partition and gives the specific construction method. The linear and differential properties of this type of S-box are analyzed. To illustrate its security and practicability, the trapdoor block cipher proposed by Bannier et al. is used as a model to verify and analyze the effectiveness of the trapdoor, and to show the security of the trapdoor S-box and the trapdoor algorithm against linear and differential cryptanalysis.
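The coset-preserving property behind such S-boxes can be illustrated on a downscaled example. This toy uses 4-bit values and a 2-dimensional linear subspace V (so a 4×4 S-box, not the paper's 8×8 construction); the trapdoor is the partition into cosets of V, which the S-box permutes as whole blocks:

```python
import itertools

# Toy construction: the S-box maps each coset of V to a coset of V,
# so x ^ y in V  <=>  S(x) ^ S(y) in V.  Knowing the hidden partition
# (the trapdoor) reveals structure invisible without it.

V = [0, 1, 2, 3]            # subspace spanned by bits 0 and 1
LEADERS = [0, 4, 8, 12]     # coset representatives of V in {0..15}

def build_trapdoor_sbox(quotient_perm, inner_perms):
    """quotient_perm: permutation of the 4 coset leaders;
    inner_perms: per-coset permutation of V (each a list of length 4)."""
    sbox = [0] * 16
    for i, leader in enumerate(LEADERS):
        target = quotient_perm[i]
        for j, v in enumerate(V):
            sbox[leader ^ v] = target ^ inner_perms[i][j]
    return sbox

def preserves_partition(sbox):
    """Check the trapdoor property over all pairs of inputs."""
    return all(((x ^ y) in V) == ((sbox[x] ^ sbox[y]) in V)
               for x, y in itertools.combinations(range(16), 2))
```

Any choice of a leader permutation plus four within-coset permutations yields a bijective S-box with the hidden-partition property.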
Analysis of Private Cloud Resource Allocation Management Based on Game Theory in Spatial Data Center
ZHAI Yong, LIU Jin, LIU Lei, CHEN Jie
Computer Science. 2020, 47 (11A): 373-379.  doi:10.11896/jsjkx.200500106
Abstract PDF(2057KB) ( 699 )   
References | Related Articles | Metrics
In view of the waste and inefficiency in the use of private cloud resources in spatial data centers, the motivation driving users to hold resources is analyzed with the mathematical tools of algorithmic game theory. It is concluded that when resources are shared equally among users, global satisfaction is maximized under the premise that the users check and balance one another. On this basis, the characteristics of resource use under individual priority and under collective priority are further analyzed, leading to the conclusion that a resource allocation model under collective priority is preferable: it maintains maximum global satisfaction while keeping resource use sustainable. Based on these two conclusions, a resource allocation and management game model is constructed, featuring user autonomy and IT-department support under collective priority, and mathematical methods for resource allocation decisions, user behavior analysis and user satisfaction evaluation are given. The applicability of the proposed game model and satisfaction evaluation method is then verified by calculation on actual data from a spatial data center. The method has reference value for solving the low utilization of private cloud resources in spatial data centers.
Participant-adaptive Variant of MASCOT
LI Yan-bin, LIU Yu, LI Mu-zhou, WU Ren-tao, WANG Peng-da
Computer Science. 2020, 47 (11A): 380-387.  doi:10.11896/jsjkx.200400091
Abstract PDF(1876KB) ( 1272 )   
References | Related Articles | Metrics
Over the last decade, secure multi-party computation (MPC) has made great strides from a mainly theoretical area to a versatile tool for building privacy-preserving applications. At CCS 2016, Keller et al. presented the MPC protocol MASCOT, whose preprocessing phase is based on oblivious transfer (OT) instead of the somewhat homomorphic encryption that classical SPDZ adopts, improving performance by two orders of magnitude over SPDZ. Due to its superior performance and high availability, MASCOT has drawn much attention from industry. In practical environments, however, there are still user needs that MASCOT cannot satisfy; its main disadvantage is that it cannot handle changes to the set of parties during the online phase. A straightforward solution is to regenerate the raw material required for online computation by rerunning the entire preprocessing phase among the new set of parties, which obviously wastes data and time. To address this practical issue, the main components of MASCOT are tweaked to adapt to various changes of the party set, including new parties joining, old parties dropping out, and new parties replacing old ones. By strictly restricting the communication of preprocessed data to the parties that have changed, or between changed and unchanged parties, rerunning the whole preprocessing phase among the remaining parties is avoided, which effectively reduces the data and time needed to accommodate party changes. In addition, this modification of MASCOT preserves the functionality, performance and security of the original protocol. In short, the participant-adaptive variant of MASCOT is closer to real application environments and suitable for wide deployment in privacy-sensitive applications. Since it only fine-tunes the preprocessing phase, the technique can also easily add participant adaptability to already deployed MASCOT protocols.
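The core idea (reshare existing material among the affected parties instead of regenerating it) can be illustrated with plain additive secret sharing. This is emphatically not MASCOT itself, which authenticates shares with MACs produced from OT-based triples; the modulus and function names below are illustrative only.

```python
import random

P = 2**61 - 1  # an illustrative prime modulus, not MASCOT's actual field

def share(secret, n):
    """Additively share `secret` among n parties: shares sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def add_party(shares):
    """Let a new party join without regenerating the secret: each old
    party donates a random slice of its own share, and the donations
    become the newcomer's share. Only the affected transfers happen,
    mirroring the paper's idea of restricting resharing traffic to the
    parties touched by the change."""
    donations, new_shares = [], []
    for s in shares:
        d = random.randrange(P)
        new_shares.append((s - d) % P)
        donations.append(d)
    new_shares.append(sum(donations) % P)
    return new_shares
```

The secret is preserved because each donation is subtracted from one share and added back through the new party's share.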
Study on Secure Log Storage Method Based on Blockchain
LIU Jing, HUANG Ju, LAI Ying-xu, QIN Hua, ZENG Wei
Computer Science. 2020, 47 (11A): 388-395.  doi:10.11896/jsjkx.200400024
Abstract PDF(2427KB) ( 1061 )   
References | Related Articles | Metrics
With the rapid development of computer science, the number of alarm logs is growing geometrically. Alarm logs record information correlated with attack behavior and are vulnerable to theft and tampering, and retrieval results often contain many irrelevant logs, which interferes with the correctness of log analysis. To address the secure storage and extraction of alarm logs, this paper proposes a blockchain-based secure log storage method. Alarm logs are stored in a blockchain-based distributed storage system, and an index library records each block's storage location. Querying the block index library replaces traditional sequential blockchain retrieval, which speeds up the retrieval of ciphertext logs. Through threat assessment of the attack source addresses in alarm logs, a ciphertext index structure is built and stored in the block header. Alarm logs classified into the same attack scenario can then be retrieved associatively based on correlation analysis. Experimental results show that when alarm logs are stored with this method, block generation efficiency is not greatly reduced by index construction, log retrieval is efficient, and the logs of an attack scenario can be obtained.
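A toy sketch of the storage layout the abstract describes: each block's header carries an index of the attack-source keys inside it, so a query inspects headers instead of scanning every block body. The field names are illustrative, the index here is unencrypted (the paper stores a ciphertext index), and the hash chaining is the standard blockchain construction rather than the paper's exact one.

```python
import hashlib
import json

def make_block(prev_hash, logs, index_keys):
    """Build a block whose header carries an index of the attack-source
    keys it contains, plus the hash of the previous block's header."""
    header = {
        "prev_hash": prev_hash,
        "index": sorted(index_keys),   # the paper stores this encrypted
        "body_hash": hashlib.sha256(
            json.dumps(logs, sort_keys=True).encode()).hexdigest(),
    }
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "logs": logs}

def find_blocks(chain, key):
    """Retrieve blocks via the header index instead of a sequential
    scan of every block's log body."""
    return [b for b in chain if key in b["header"]["index"]]
```

Because the index lives in the header, retrieval cost depends on header lookups rather than on the volume of stored logs.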
Analysis of Kaminsky Attack and Its Abnormal Behavior
CHEN Xi, FENG Mei, JIANG Bo
Computer Science. 2020, 47 (11A): 396-401.  doi:10.11896/jsjkx.200100060
Abstract PDF(2478KB) ( 1341 )   
References | Related Articles | Metrics
The Kaminsky attack is a kind of remote DNS cache poisoning attack. Once the attack succeeds, requests to resolve names in a second-level domain are directed to a fake authoritative name server. This article proposes a novel method for detecting the abnormal behavior of Kaminsky attacks based on attack signatures. First, features such as time, IP address, DNS flags and DNS transaction ID are extracted from DNS packets. Then a sliding window is applied to deduplicate transaction IDs and to compute the conditional entropy of the transaction ID given the IP address. Finally, an improved CUSUM algorithm is applied to the time series of this conditional entropy to detect the attack interval. In addition, using the data within the detected interval, the conditional entropy can be traced back to the IP address of the poisoning target, namely the authoritative name server. The analysis sample consists of attack traffic and normal traffic. Simulations with different parameters of the attack code verify that this method not only has small time complexity, but also achieves low false positive and false negative rates and a high detection rate, making it an effective means of detection and analysis.
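The two building blocks named in the abstract can be sketched directly: the conditional entropy H(TransactionID | IP) over a window of observations, and a one-sided CUSUM pass over that entropy series. The parameter names are illustrative; the paper's "improved" CUSUM variant is not specified here, so the standard form is shown.

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(pairs):
    """H(txid | ip) in bits for a window of (ip, txid) observations.
    A flood of guessed transaction IDs from one source drives this up."""
    by_ip = defaultdict(list)
    for ip, txid in pairs:
        by_ip[ip].append(txid)
    n = len(pairs)
    h = 0.0
    for txids in by_ip.values():
        p_ip = len(txids) / n
        counts = Counter(txids)
        h_ip = -sum((c / len(txids)) * math.log2(c / len(txids))
                    for c in counts.values())
        h += p_ip * h_ip
    return h

def cusum(series, target, drift, threshold):
    """Standard one-sided CUSUM: flag indices where the accumulated
    positive deviation from `target` exceeds `threshold`."""
    s, alarms = 0.0, []
    for i, x in enumerate(series):
        s = max(0.0, s + (x - target - drift))
        if s > threshold:
            alarms.append(i)
    return alarms
```

In normal traffic each resolver IP reuses few transaction IDs per window, so the entropy stays near a stable baseline; a Kaminsky burst raises it and trips the CUSUM statistic.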
Big Data & Data Science
Research on Training Sample Data Selection Methods
ZHOU Yu, REN Qin-chai, NIU Hui-bin
Computer Science. 2020, 47 (11A): 402-408.  doi:10.11896/jsjkx.191100094
Abstract PDF(1755KB) ( 3493 )   
References | Related Articles | Metrics
Machine learning, as an important tool in data mining, not only explores the human cognitive learning process but also involves the analysis and processing of data. Facing the challenge of massive data, some current research focuses on improving and developing machine learning algorithms, while other research focuses on selecting sample data and reducing data sets; the two directions proceed in parallel. The selection of training samples is a research hotspot of machine learning: by effectively selecting sample data, extracting the more informative samples, and eliminating redundant samples and noisy data, the quality of the training samples is improved and better learning performance is obtained. This paper reviews existing sample selection methods in four categories: sampling-based methods, clustering-based methods, methods based on nearest-neighbor classification rules, and other related data selection methods. It summarizes and compares these methods, and puts forward conclusions and prospects regarding the open problems in training sample selection and future research directions.
Duplicate Formula Detection Based on Deep Convolutional Neural Network
CHEN Ang, TONG Wei, ZHOU Yu-qiang, YIN Yu, LIU Qi
Computer Science. 2020, 47 (11A): 409-415.  doi:10.11896/jsjkx.200100108
Abstract PDF(2208KB) ( 1008 )   
References | Related Articles | Metrics
In recent years, with the development of educational intelligence, Internet education has become an important carrier of teaching. Various online education systems provide learners with convenient access to vast exercise resources. However, the accumulated exercise resources suffer from a high repetition rate and low quality due to varied question sources and inconsistent collection methods. Accurately and efficiently detecting duplicate questions is therefore an important way to refine online resources and improve their quality. In this context, this paper focuses on duplicate detection of formula images in science test resources: accurate formula detection eliminates the interference of question semantics and thus improves resource monitoring. Traditional formula duplicate-detection methods are often based on manually defined rules and are difficult to apply to large-scale formula data because of cumbersome recognition steps, low accuracy and low efficiency. This paper therefore proposes a duplicate-detection method based on a deep convolutional neural network. First, a multi-channel convolution mechanism is used to automatically extract and process formula image features, making the method suitable for large-scale formula data. Second, the end-to-end output mode avoids the error accumulation caused by the many intermediate steps of traditional methods. Finally, to verify the accuracy and practicability of the model, extensive experiments are carried out on a standard test set and on a data set with simulated scanning noise. The results show that the method can effectively process formula images of different quality and achieves good results in both accuracy and efficiency.
Extraction and Automatic Classification of TCM Medical Records Based on Attention Mechanism of BERT and Bi-LSTM
DU Lin, CAO Dong, LIN Shu-yuan, QU Yi-qian, YE Hui
Computer Science. 2020, 47 (11A): 416-420.  doi:10.11896/jsjkx.200200020
Abstract PDF(2361KB) ( 1694 )   
References | Related Articles | Metrics
The development of traditional Chinese medicine (TCM) has gradually become a hot topic, and TCM medical records contain a huge amount of valuable medical information. However, in mining and utilizing the text of TCM medical records, extracting effective information and classifying it has always been difficult, so a method for extracting and automatically classifying TCM medical records is of great clinical value. This paper proposes a short-text classification model for medical records that fuses BERT, Bi-LSTM and attention. BERT preprocessing produces the short-text vectors used as model input; the pre-training effect of BERT is compared with that of the word2vec model, and Bi-LSTM with attention is compared with a plain LSTM. Experimental results show that the fused BERT + Bi-LSTM + attention model achieves the highest average F1 value of 89.52% in the extraction and classification of TCM medical records. The comparison shows that BERT pre-training significantly outperforms word2vec, and that Bi-LSTM with attention significantly outperforms LSTM. The fusion model proposed in this paper therefore has clear medical value for the extraction and classification of medical records.
Study on Electric Vehicle Price Prediction Based on PSO-SVM Multi-classification Method
LI Bao-sheng, QIN Chuan-dong
Computer Science. 2020, 47 (11A): 421-424.  doi:10.11896/jsjkx.191200132
Abstract PDF(1961KB) ( 856 )   
References | Related Articles | Metrics
With the promotion of new energy vehicles, electric vehicles have gradually entered thousands of households. Many factors affect the price of an electric vehicle, and twenty such attributes are studied here. First, the data are preprocessed with the Pearson correlation coefficient method and principal component analysis (PCA) to obtain more essential sample attributes. Then the new data are studied by multi-class supervised learning: based on an SVM model, particle swarm optimization is used to tune the support vector machine's parameters, successfully realizing multi-class prediction of electric vehicle prices. The experimental results show that the multi-class PSO-SVM model is clearly effective.
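The parameter-tuning step is generic particle swarm optimization; the sketch below shows the standard PSO loop minimizing an arbitrary objective over a box, which in the paper's setting would be the SVM's cross-validation error over (C, gamma). All hyperparameter values here are illustrative defaults, not the paper's.

```python
import random

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over box `bounds`; return (best position, value).
    Standard PSO: each particle is pulled toward its personal best and
    the swarm's global best, with inertia weight w."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated coordinate to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Swapping the toy objective for "cross-validated error of an SVM with parameters p" recovers the PSO-SVM tuning scheme.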
Application of Improved DBSCAN Algorithm on Spark Platform
DENG Ding-sheng
Computer Science. 2020, 47 (11A): 425-429.  doi:10.11896/jsjkx.190700071
Abstract PDF(1969KB) ( 915 )   
References | Related Articles | Metrics
Aiming at the high memory occupancy of the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, this paper combines an improved DBSCAN clustering algorithm with the parallel computing model of the Spark platform to cluster massive data, which greatly reduces the memory usage of the algorithm. Simulation results show that the proposed parallel method effectively alleviates the memory shortage. A comparison with DBSCAN clustering on the Hadoop platform shows better computing performance, with a speedup about 24% higher than on Hadoop. The proposed method can also be used to evaluate the pros and cons of DBSCAN clustering.
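For reference, the core DBSCAN rule being parallelized looks like the single-machine sketch below; the Spark version runs this logic per data partition and then merges boundary clusters, a step omitted here. The implementation is a plain textbook DBSCAN, not the paper's improved variant.

```python
def dbscan(points, eps, min_pts):
    """Plain DBSCAN over point tuples; returns a label per point, -1 = noise.
    A point with >= min_pts neighbors within eps is a core point and
    seeds/extends a cluster; reachable non-core points become borders."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisional noise
            continue
        labels[i] = cid
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid         # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:  # core point: keep expanding
                queue.extend(nbrs_j)
        cid += 1
    return labels
```

The quadratic neighbor search is exactly the memory/compute cost that motivates partitioning the data across Spark executors.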
User Importance Evaluation for Q&A Platform Based on User Relations
LI Xiao, QU Yang, LI Hui, GUO Shi-kai
Computer Science. 2020, 47 (11A): 430-436.  doi:10.11896/jsjkx.200500024
Abstract PDF(2523KB) ( 657 )   
References | Related Articles | Metrics
Q&A sites have increasingly become important platforms for WWW users to acquire knowledge. As the number of users grows rapidly, identifying the important users becomes more and more difficult, and more and more questions go unanswered on Q&A platforms, which seriously affects the user experience. To solve this problem, we regard users' questions and answers on a Q&A platform as a kind of social network behavior and build a user relationship network from these behaviors. On this basis, we present a user importance ranking based on the user relationship network and use it to identify the platform's important users. Experimental studies on a Stack Overflow data set show that the rankings produced are consistent with the actual ranking lists and are relatively stable; furthermore, the ranking results can be used to improve question recommendation. Applying this importance measure, we designed and developed a Q&A platform, and empirical studies show that the ranking method can identify important users and improve the user experience of knowledge acquisition.
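One common way to rank users on such a relationship network is a PageRank-style iteration; the sketch below is a generic stand-in (the paper's exact measure is not specified here), with the illustrative convention that an edge (a, b) means user a answered user b's question, so importance flows from asker to answerer.

```python
def rank_users(edges, damping=0.85, iters=100):
    """PageRank-style importance over a directed answer graph.
    edges: iterable of (answerer, asker) pairs; returns {user: score}."""
    nodes = {u for e in edges for u in e}
    out = {u: [] for u in nodes}
    for answerer, asker in edges:
        out[asker].append(answerer)       # asker endorses answerer
    rank = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / len(nodes) for u in nodes}
        for u in nodes:
            targets = out[u] or list(nodes)   # dangling node: spread evenly
            for v in targets:
                new[v] += damping * rank[u] / len(targets)
        rank = new
    return rank
```

Users who answer many questions, especially questions from users who are themselves endorsed, accumulate the highest scores.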
Tax Prediction Based on LSTM Recurrent Neural Network
WEN Hao, CHEN Hao
Computer Science. 2020, 47 (11A): 437-443.  doi:10.11896/jsjkx.200300091
Abstract PDF(2515KB) ( 995 )   
References | Related Articles | Metrics
Analyzing the hidden relationships in historical tax data and using mathematical models to predict future tax revenue is the focus of tax forecasting research. This paper proposes a tax prediction model that combines a long short-term memory (LSTM) recurrent neural network with the wavelet transform. The wavelet transform is applied during preprocessing to remove noise from the tax data and improve the model's generalization ability. With its hidden units and gating units, the LSTM network better learns the correlations in historical tax data, extracts valid state information from the input sequences, and overcomes the long-term dependency problem of recurrent neural networks. Experimental results show that an encoder-decoder structure based on the LSTM network can extend the time horizon of tax prediction, and that in long-term prediction the model significantly improves accuracy compared with a single-step sliding-window LSTM model, a grey model based on difference differential equations, and the autoregressive integrated moving average (ARIMA) model.
Dominating Set Algorithm for Graphs Based on Vertex Order
WANG Hong, GUANG Li-he
Computer Science. 2020, 47 (11A): 444-448.  doi:10.11896/jsjkx.200300023
Abstract PDF(1807KB) ( 742 )   
References | Related Articles | Metrics
This paper introduces the attribute order of rough set theory into graph theory and studies the dominating set problem of undirected graphs under a vertex order. First, a total order relation, called the vertex order, is defined on the vertex set of a graph. Then a binary equivalence relation is defined from the vertex order, yielding a partition of the closed sets of all vertices in the graph. Finally, based on this partition, a minimal dominating set algorithm under the vertex order is designed, and the completeness and uniqueness of the minimal dominating set it computes for a given vertex order are proved. An example shows the correctness and effectiveness of the algorithm.
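To illustrate how a vertex order determines a unique minimal dominating set, the sketch below uses the classical greedy fact that scanning vertices in order and taking each undominated vertex yields a maximal independent set, which is always a minimal dominating set. This is a simplification of the idea, not the paper's partition-based algorithm.

```python
def minimal_dominating_set(adj, order):
    """Greedy scan under a given vertex order: take v when no already
    chosen vertex dominates it (v itself or a neighbor of v).
    adj maps each vertex to the set of its neighbors.
    The result is a maximal independent set in that order, hence a
    minimal dominating set; a different order can give a different set."""
    chosen = set()
    for v in order:
        if v not in chosen and not (adj[v] & chosen):
            chosen.add(v)
    return chosen
```

On a path 0-1-2-3, the order (0, 1, 2, 3) yields {0, 2} while (1, 0, 2, 3) yields {1, 3}, showing both the determinism per order and the order-dependence the abstract emphasizes.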
Community Detection in Signed Networks with Game Theory
WANG Shuai-hui, HU Gu-yu, PAN Yu, ZHANG Zhi-yue, ZHANG Hai-feng, PAN Zhi-song
Computer Science. 2020, 47 (11A): 449-453.  doi:10.11896/jsjkx.200200049
Abstract PDF(2631KB) ( 891 )   
References | Related Articles | Metrics
As a meso-scale feature of complex networks, community structure is of great significance for understanding the structure and properties of networks. Unlike unsigned networks, signed networks contain positive and negative edges, representing friendly and hostile relations respectively. When forming a community, a node usually chooses to be in the same community as its friends but in a different community from its enemies. Based on this idea, a game-theoretic model for community detection in signed networks is constructed, and a corresponding algorithm is designed. Experimental results show that the algorithm performs well in identifying both non-overlapping and overlapping communities. In addition, the efficiency of the algorithm is verified, and an optimization method that effectively improves its efficiency is proposed.
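The "friends inside, enemies outside" intuition translates into a simple per-node payoff; the sketch below shows one best-response step under the illustrative utility (positive edges into a community minus negative edges into it), which is an assumed stand-in for the paper's actual payoff function.

```python
def best_response(node, communities, pos, neg):
    """Return the index of the community maximizing the node's utility:
    (#friends inside) - (#enemies inside). pos/neg map each node to the
    sets of its positive and negative neighbors."""
    def utility(c):
        return len(pos[node] & c) - len(neg[node] & c)
    return max(range(len(communities)), key=lambda i: utility(communities[i]))
```

Iterating such moves until no node wants to switch gives a Nash-stable assignment, which is the kind of equilibrium a game-theoretic detection algorithm searches for.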
Study on XGBoost Improved Method Based on Genetic Algorithm and Random Forest
WANG Xiao-hui, ZHANG Liang, LI Jun-qing, SUN Yu-cui, TIAN Jie, HAN Rui-yi
Computer Science. 2020, 47 (11A): 454-458.  doi:10.11896/jsjkx.200600002
Abstract PDF(1633KB) ( 1900 )   
References | Related Articles | Metrics
Regression prediction is one of the important research directions in machine learning and has broad applications. To improve the accuracy of regression prediction, an improved XGBoost method based on a genetic algorithm and random forest (GA_XGBoost_RF) is proposed. First, exploiting the good search ability and flexibility of the genetic algorithm (GA), the parameters of the XGBoost algorithm and of the random forest algorithm (RF) are optimized with the average cross-validation score as the objective function, and the better parameter sets are selected to build GA_XGBoost and GA_RF models respectively. Then the two models are combined with variable weights: with the mean square error between the predicted and true values of the training set as the objective function, the weight of each model is determined by the genetic algorithm. Results on UCI data sets show that, compared with XGBoost, random forest, GA_XGBoost and GA_RF, the GA_XGBoost_RF method achieves lower mean square error and mean absolute error than any single model on most data sets, improving the fit by about 0.01% to 2.1% across data sets; it is an effective regression prediction method.
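The combination step reduces to finding a scalar weight between two models' predictions; the paper searches it with a genetic algorithm, but a grid search (used below purely for brevity) shows the same idea. The models themselves are abstracted away as prediction lists.

```python
def combine_weight(pred1, pred2, y, steps=1000):
    """Find alpha in [0, 1] minimizing the MSE of the blended prediction
    alpha * pred1 + (1 - alpha) * pred2 against targets y.
    (The paper tunes alpha with a GA; grid search stands in here.)"""
    def mse(a):
        return sum((a * p + (1 - a) * q - t) ** 2
                   for p, q, t in zip(pred1, pred2, y)) / len(y)
    best = min((i / steps for i in range(steps + 1)), key=mse)
    return best, mse(best)
```

Whenever the two base models err in different directions, the blended prediction can beat both, which is the effect the abstract reports on the UCI data sets.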
Mining Trend Similarity of Multivariate Hydrological Time Series Based on XGBoost Algorithm
DING Wu, MA Yuan, DU Shi-lei, LI Hai-chen, DING Gong-bo, WANG Chao
Computer Science. 2020, 47 (11A): 459-463.  doi:10.11896/jsjkx.200500128
Abstract PDF(2419KB) ( 1285 )   
References | Related Articles | Metrics
To address the shortcomings of traditional hydrological trend prediction with neural networks and similar tools, such as uninterpretable results, this paper proposes a hydrological trend prediction method based on machine learning. The XGBoost algorithm is used to establish a similarity mapping model between each hydrological feature of the reference period and the forecast period, so that the historical hydrological time series most similar to the trend in the forecast period can be matched, thereby achieving hydrological trend prediction. To demonstrate the efficiency and feasibility of the proposed method, it is verified on the Taihu hydrological time series data. The analysis results show that trend similarity analysis of multivariate hydrological time series based on machine learning can meet dispatchers' requirements for predicting future hydrological trends.
Prediction Method of Flight Delay in Designated Flight Plan Based on Data Mining
ZHANG Cheng-wei, LUO Feng-e, DAI Yi
Computer Science. 2020, 47 (11A): 464-470.  doi:10.11896/jsjkx.200600001
Abstract PDF(3666KB) ( 1473 )   
References | Related Articles | Metrics
Since existing flight delay prediction methods rarely analyze delays from the perspective of a designated flight plan, a method for predicting the delay of a specified plan among the departure flight plans is proposed. First, the intrinsic characteristics of a large volume of historical flight data are analyzed through data mining. Second, dynamic Bayesian network inference is adopted as the main modeling approach to obtain the probability distribution of flight delay under different conditions. By studying the dynamic Bayesian network inference process and simulation, this paper presents a new way to construct the flight delay prediction model: a hidden Markov flight delay model is built on real flight data, and the Viterbi algorithm for the HMM decoding problem is used to predict the flight delay time. Finally, an airline's full-year flight operation data are used for simulation and verification, and the results show that the method improves the accuracy of flight delay prediction.
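The decoding step is the standard Viterbi algorithm; the sketch below implements it over dictionaries, with delay levels as the hidden states. The state/observation names and probabilities are invented for illustration, not taken from the paper's model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (most probable hidden state path, its probability) for an
    observation sequence, via standard Viterbi dynamic programming."""
    # layer 0: probability of starting in each state and emitting obs[0]
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            # best predecessor for state s at this step
            prob, path = max(
                (V[-1][r][0] * trans_p[r][s] * emit_p[s][o], V[-1][r][1] + [s])
                for r in states)
            layer[s] = (prob, path)
        V.append(layer)
    prob, path = max(V[-1].values())
    return path, prob
```

Run on a toy two-state delay model, it recovers the likeliest sequence of delay levels behind a series of observed indicators.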
Balance Between Preference and Universality Based on Explicit Feedback Collaborative Filtering
HUANG Chao-ran, GAN Yong-shi
Computer Science. 2020, 47 (11A): 471-473.  doi:10.11896/jsjkx.200600109
Abstract PDF(1756KB) ( 612 )   
References | Related Articles | Metrics
Collaborative filtering (CF) based on explicit feedback involves only three variables, and its similarity computation depends on users' explicit rating data while ignoring the implicit factors present in real-world recommendation. CF is thus limited to mining the preferences of users and items and lacks the ability to mine their universality[5]. Academia has proposed various ideas to improve traditional CF, but most are vertical improvements to the CF algorithm, such as adding classification, clustering or time-series mechanisms; they improve the algorithm's structure but barely extend its variable factors, so they still cannot deeply mine the universality of users and items. This paper proposes a horizontal improvement, Collaborative Filtering & Regression Weighted Average (CRW), which mines the universality of users and items through tree regression while preserving their preferences through CF, and takes a weighted average of the regression and CF predictions to balance CF's strength in preference against its weakness in universality. Experimental results show that with a proper weighting coefficient a, the mean square error of CRW's predictions is distinctly lower than that of CF or regression alone, so CRW performs better than either single method.
Feature Selection Method Combined with Multi-manifold Structures and Self-representation
YI Yu-gen, LI Shi-cheng, PEI Yang, CHEN Lei, DAI Jiang-yan
Computer Science. 2020, 47 (11A): 474-478.  doi:10.11896/jsjkx.200100037
Abstract PDF(2649KB) ( 704 )   
References | Related Articles | Metrics
Feature selection reduces the dimensionality of data by removing irrelevant and redundant features, improving the efficiency of learning algorithms. Unsupervised feature selection has become one of the challenging problems in dimensionality reduction. Combining feature self-representation with the manifold structure of features, a Joint Multi-Manifold Structures and Self-Representation (JMMSSR) unsupervised feature selection algorithm is proposed. Unlike existing approaches, it designs an adaptive weighting strategy to integrate multiple manifold structures and thus describe the structure of features accurately. A simple and effective iterative update algorithm is then proposed to solve the objective function, and the convergence of the optimization is verified by numerical experiments. Finally, experimental results on three data sets (JAFFE, ORL and COIL20) show that the proposed approach outperforms existing unsupervised feature selection approaches.
Service Recommendation Algorithm Based on Canopy and Shared Nearest Neighbor
SHAO Xin-xin
Computer Science. 2020, 47 (11A): 479-481.  doi:10.11896/jsjkx.200200031
Abstract PDF(2333KB) ( 718 )   
References | Related Articles | Metrics
To improve the accuracy of service recommendation by banking institutions, a clustering algorithm based on an improved Canopy algorithm and shared-nearest-neighbor similarity is proposed; users are segmented with this algorithm, and precise service recommendations are made according to the characteristics of each user group. First, the improved Canopy algorithm produces the initial clustering result. Then the shared-nearest-neighbor similarity algorithm reclassifies the data lying in the intersections of the clusters, yielding the final user clusters. The algorithm is applied to real customer data of a bank, clustering on three indexes: customer contribution, loyalty and activity. The results show that the algorithm improves the quality of customer segmentation and the efficiency of clustering, that the clusters accurately describe customers' consumption data, and that the clustering results can provide data support for a bank's precise service recommendation.
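The reclassification step for points that fall into overlapping canopies can be sketched with shared-nearest-neighbor similarity: an ambiguous point goes to the cluster whose members share the most nearest neighbors with it. All names below are illustrative; the paper's exact tie-breaking rule is not reproduced here.

```python
def snn_assign(point, clusters, knn):
    """Assign an ambiguous point to the cluster whose members share the
    most k-nearest neighbors with it.
    clusters: {cluster_id: [member ids]}; knn: {id: set of neighbor ids}."""
    def shared(a, b):
        return len(knn[a] & knn[b])   # SNN similarity of two points
    return max(clusters,
               key=lambda c: sum(shared(point, m) for m in clusters[c]))
```

Because SNN similarity counts common neighbors rather than raw distance, it stays meaningful in the overlapping regions where plain distance-based assignment is ambiguous.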
Bipartite Network Recommendation Algorithm Based on Semantic Model
ZHOU Bo
Computer Science. 2020, 47 (11A): 482-485.  doi:10.11896/jsjkx.200400028
Abstract PDF(1956KB) ( 717 )   
References | Related Articles | Metrics
Current research on bipartite network recommendation algorithms does not consider semantic relationships, so this paper proposes an improved bipartite network recommendation algorithm. The author-topic model (AT model) is used to embed the semantic information into a two-dimensional semantic space; the semantic similarity between recommended objects is then calculated and integrated into the similarity calculation of the bipartite network recommendation algorithm. The algorithm is verified by recommending new energy vehicle patentees. Experimental results show that the new algorithm achieves higher accuracy and recall than the plain bipartite network recommendation algorithm: accuracy increases by 2.29% and recall by 4.15%.
BP Neural Network Water Resource Demand Prediction Method Based on Improved Whale Algorithm
MA Chuang, ZHOU Dai-qi, ZHANG Ye
Computer Science. 2020, 47 (11A): 486-490.  doi:10.11896/jsjkx.191200047
Abstract PDF(2103KB) ( 1129 )   
References | Related Articles | Metrics
With the increasing density of modern residential areas and the continuous expansion of water supply networks, water supply faces new difficulties and challenges, including dynamically changing water resource scheduling, sudden pipe network failures, uncontrollable water losses, multiple objectives and huge computation. BP neural networks have been widely used in water resource prediction because of their strong self-learning and generalization abilities, but they converge slowly and easily fall into local extrema. Swarm intelligence algorithms, as a class of optimization algorithms, are simple to operate, converge quickly and have strong global optimization ability. To improve the convergence speed and prediction accuracy of BP neural networks in water resource prediction, a BP neural network water demand prediction model optimized by an improved whale optimization algorithm (WOA) is proposed: the breadth and accuracy of the search are strengthened, and the optimal weights and thresholds output by the improved WOA are used as the initial parameters for training the BP neural network. Experiments verify that the improved WOA-BP method outperforms the traditional WOA-BP method in both convergence speed and prediction accuracy.
Short-term Trend Forecasting of Stocks Based on Multi-category Feature System
WANG Ting, XIA Yang-yu-xin, CHEN Tie-ming
Computer Science. 2020, 47 (11A): 491-495.  doi:10.11896/jsjkx.200100055
Abstract PDF(2097KB) ( 1496 )   
References | Related Articles | Metrics
With the rapid development of the economy and technology, the stock market has become an important part of the financial market. Traditional machine learning methods have limitations in handling stock prediction problems that are nonlinear, noisy or highly volatile. In recent years, the rise of deep neural networks has provided new solutions to stock trend forecasting. In this paper, a long short-term memory network (LSTM) is used to handle long-range temporal dependencies in stock data, and a multi-category feature system is constructed as the training input, including common technical indicators, multiple key features, and real event information for individual stocks. The experiments comprehensively analyze the effectiveness of the various feature categories for trend prediction, and the comparison shows that the multi-category feature system performs well, reaching a short-term forecast accuracy of 68.77%. In addition, LSTM is compared with other models such as the convolutional neural network (CNN), recurrent neural network (RNN) and multilayer perceptron (MLP); the experimental results show that LSTM is superior to the other models for this problem.
Analysis and Forecast of Some Climate Indexes in Main Producing Areas of Yunnan Province Based on Multiple Models
CHEN Pei, ZHENG Wan-bo, LIU Wen-qi, XIAO Min, ZHANG Ling-xiao
Computer Science. 2020, 47 (11A): 496-503.  doi:10.11896/jsjkx.200200059
Abstract PDF(3682KB) ( 751 )   
References | Related Articles | Metrics
In view of the lack of prediction models and modeling methods for crop planting and climate indexes in Yunnan Province, the research status of data analysis and prediction models for the main climatic factors, such as precipitation, temperature and air humidity, is first summarized. The comprehensive relationships between temperature, rainfall, humidity and agroclimatic resources are analyzed, the data are cleaned, and the main analysis indexes are selected. Secondly, the precipitation, temperature and air humidity of Yunnan Province are modeled using 30 years of data from 1981 to 2010. Thirdly, the Matlab Curve Fitting Tool is used to fit and predict the climate indexes of the selected areas, and the prediction error is calculated along with a numerical fitting error analysis. Finally, an ARIMA model is established with SPSS as a supplement to the above models. Experimental verification shows that for 90% of the models the prediction error is successfully controlled within 10%. Through this study, a model for analyzing and predicting some climate indexes in the main producing areas of Yunnan Province is established, which can guide the regional planning of crop planting in Yunnan Province.
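The curve-fitting step described above can be sketched with NumPy's least-squares polynomial fit as a stand-in for the Matlab Curve Fitting Tool. The temperature-like series below is synthetic and purely illustrative; the 10% error criterion mirrors the one reported in the abstract.

```python
import numpy as np

# Synthetic yearly-mean-temperature-like series: linear trend plus noise
years = np.arange(1981, 2011)                  # the 30-year window used in the paper
rng = np.random.default_rng(1)
temp = 15.0 + 0.03 * (years - 1981) + rng.normal(0, 0.2, years.size)

coef = np.polyfit(years, temp, deg=1)          # least-squares linear fit
pred = np.polyval(coef, years)
rel_err = np.abs(pred - temp) / np.abs(temp)   # per-year relative prediction error
share_within_10pct = np.mean(rel_err < 0.10)   # fraction of points within 10% error
```

Higher-degree fits are obtained by raising `deg`; the relative-error check is how a "controlled within 10%" claim would be evaluated on real data.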
Incremental FFT Based on Apache Storm and Its Application
ZHAO Xin, MA Zai-chao, LIU Ying-bo, DING Yu-ting, WEI Mu-heng
Computer Science. 2020, 47 (11A): 504-507.  doi:10.11896/jsjkx.191000086
Abstract PDF(2795KB) ( 780 )   
References | Related Articles | Metrics
The conventional Fast Fourier Transform (FFT) is a stand-alone batch-processing algorithm and therefore has difficulty processing industrial big data in real time. In this paper, an incremental FFT based on Apache Storm is proposed. A non-recursive computational logic is first designed in Apache Storm. Then a rotor misalignment experiment is performed on a Bently rotor test bench. With the rotor vibration data, a visual monitoring interface is developed on the DataWay framework. The results show that, with the proposed method and its implementation, the frequency spectrum of the stream data can be updated in real time.
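The incremental idea can be illustrated by the classical sliding DFT, which updates an N-point spectrum in O(N) per new sample instead of recomputing the whole transform. This is a sketch of the principle only, not the paper's Storm topology.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    """Update an N-point DFT when the window slides by one sample:
    X_k <- (X_k - x_old + x_new) * exp(2j*pi*k/N)."""
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

N = 64
rng = np.random.default_rng(0)
signal = rng.normal(size=N + 1)               # stand-in for streamed vibration data
X = np.fft.fft(signal[:N])                    # spectrum of the initial window
X = sliding_dft_update(X, signal[0], signal[N], N)
X_ref = np.fft.fft(signal[1:N + 1])           # full recomputation, for comparison
```

In a Storm topology, a bolt would hold `X` as state and apply `sliding_dft_update` for each tuple arriving from the sensor spout.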
Multimodal Sentiment Analysis Based on Attention Neural Network
LIN Min-hong, MENG Zu-qiang
Computer Science. 2020, 47 (11A): 508-514.  doi:10.11896/jsjkx.191100041
Abstract PDF(2587KB) ( 2270 )   
References | Related Articles | Metrics
In recent years, more and more people are keen to express their feelings and opinions on social media in the form of both pictures and text, and the scale of multimodal data combining images and text keeps growing. Compared with single-modal data, multimodal data contains more information and can better reveal users' real emotions. Sentiment analysis of these huge amounts of multimodal data helps to better understand people's attitudes and opinions, and it has a wide range of applications. To address the problem of information redundancy in multimodal sentiment analysis, this paper proposes a multimodal sentiment analysis method based on a tensor fusion scheme and attention neural networks. The method constructs attention-based text and image feature extraction models to highlight the image regions and words that carry emotional information, making each feature representation more concise and accurate. The modal features are fused by the tensor fusion method to obtain a joint feature vector, and a support vector machine is finally used for sentiment classification. Experimental results on two real Twitter datasets show that, compared with other sentiment analysis models, this method achieves considerable improvements in precision, recall, F1 score and accuracy.
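A common form of the tensor fusion step is the outer product of the modality vectors, each padded with a constant 1 so that unimodal as well as bimodal interaction terms appear in the joint feature. This is a generic sketch with made-up feature vectors, not the paper's exact formulation.

```python
import numpy as np

def tensor_fuse(text_feat, image_feat):
    """Tensor-fusion sketch: outer product of [text;1] and [image;1],
    flattened into a joint feature vector."""
    t = np.append(text_feat, 1.0)
    v = np.append(image_feat, 1.0)
    return np.outer(t, v).ravel()

text_feat = np.array([0.2, -0.5, 0.9])        # e.g. attention-pooled text features
image_feat = np.array([0.7, 0.1])             # e.g. attention-pooled image features
joint = tensor_fuse(text_feat, image_feat)    # fed to the SVM classifier
```

The padded 1 entries guarantee that each original unimodal feature survives in the product, alongside every pairwise text-image interaction.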
Study on Learning to Rank Based on Tensor Decomposition in Personalized Tag Recommendation
YANG Yang, DI Yi-de, LIU Jun-hui, YI Chao, ZHOU Wei
Computer Science. 2020, 47 (11A): 515-519.  doi:10.11896/jsjkx.191100181
Abstract PDF(1987KB) ( 757 )   
References | Related Articles | Metrics
Tags provide a way for a system to organize and manage users and items, while personalized tag recommendation not only facilitates user input but also helps to improve the quality of system tags. In turn, the system can obtain more information about users and items, improve the accuracy of subsequent recommendations, and improve the user experience; it therefore plays an important role in business scenarios such as Taobao and Didi. However, most existing tag recommenders pay no attention to the ranking within the recommendation list. A tag ranked too low in the list easily loses the opportunity to be used, resulting in missing information about users and items and hindering subsequent accurate recommendation. To address these problems, a personalized tag recommendation algorithm based on tensor Tucker decomposition and list-wise learning to rank is proposed. The algorithm is trained by optimizing MAP, and simulation experiments on the Last.fm dataset not only verify the effectiveness of the algorithm but also fully explore the influence of parameters such as the learning rate and the dimension of the core tensor. Experimental results show that the algorithm greatly improves the ranking of the recommendation list and that its performance decreases linearly as the length of the list increases. The algorithm is conducive to providing better recommendation services according to user preferences.
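In a Tucker-decomposition recommender, the score of tag t for a (user, item) pair is the core tensor contracted with the three factor vectors. The dimensions and random factors below are hypothetical; in the paper they would be learned by list-wise MAP optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_tags, d = 5, 6, 10, 3
U = rng.normal(size=(n_users, d))             # user factor matrix
I = rng.normal(size=(n_items, d))             # item factor matrix
T = rng.normal(size=(n_tags, d))              # tag factor matrix
core = rng.normal(size=(d, d, d))             # Tucker core tensor

def tag_scores(u, i):
    """Score every tag for a (user, item) pair via the Tucker reconstruction
    Y[u,i,t] = core x1 U[u] x2 I[i] x3 T[t]."""
    return np.einsum('abc,a,b,tc->t', core, U[u], I[i], T)

top3 = np.argsort(-tag_scores(0, 0))[:3]      # recommend the 3 best-ranked tags
```

List-wise learning to rank would then compare this ranking against the user's observed tags and backpropagate a MAP-based loss into `core`, `U`, `I` and `T`.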
Study on Impact Assessment Model of Enterprise Data Application
YUE Wen-jiao, LI Peng, WEN Jun-hao, XING Bin
Computer Science. 2020, 47 (11A): 520-523.  doi:10.11896/jsjkx.200200062
Abstract PDF(2263KB) ( 1234 )   
References | Related Articles | Metrics
Aiming at the problems of low data utilization and difficult data quality assessment, and considering Chinese enterprises' data governance and application requirements, an enterprise Data Impact Assessment Model (DIAM) framework is proposed in cooperation with the US RMDS laboratory. From the perspective of enterprise data application, a data science assessment dimension is creatively added, and the framework remains compatible with existing mainstream assessment models while better meeting the needs of Chinese enterprises. Since the existing DIAM model has not yet offered a specific, feasible evaluation method, this paper studies an evaluation method and rating strategy for the DIAM model on the basis of the framework. First, an improved analytic hierarchy process is used to calculate the weights of 240 evaluation indicators covering the DIAM model's dimensions of data top-level design, data science and data management. Then, based on the weight calculation, a top-down model evaluation method is studied. Further, a five-level data impact rating is proposed, and a comprehensive rating adjustment strategy is defined for the rating results. Analysis shows that the improved DIAM model can be applied to evaluating the impact of enterprise data applications and provides a scientific basis for enterprise data governance and application capability.
Mining Nuclear Medicine Diagnosis Text for Correlation Extraction Between Lesions and Their Representations
HAN Cheng-cheng, LIN Qiang, MAN Zheng-xing, CAO Yong-chun, WANG Hai-jun, WANG Wei-lan
Computer Science. 2020, 47 (11A): 524-530.  doi:10.11896/jsjkx.200400062
Abstract PDF(2871KB) ( 854 )   
References | Related Articles | Metrics
Medical imaging is an indispensable part of disease diagnosis and treatment in modern clinical medicine. SPECT is a major functional imaging technology and has been widely used in the diagnosis and treatment of diseases such as tumor bone metastasis. A SPECT diagnostic text contains a patient's personal information, an image description and suggested results. In order to accurately extract the associations between lesions and their representations in SPECT nuclear medicine bone imaging diagnostic texts, a data-mining-based method for mining association rules from nuclear medicine texts is proposed. Firstly, a preprocessing and uniform coding method for SPECT medical diagnostic texts is proposed to solve the problems of information redundancy, data loss and inconsistent expression. Secondly, based on the classical association rule mining algorithm Apriori, an algorithm for mining associations between lesions and their representations is proposed. Finally, the proposed method is validated on a set of real-world SPECT nuclear medicine diagnostic texts from the nuclear medicine department of a 3A-grade hospital. The results show that the proposed method can objectively extract the associations between lesions and their representations, with an average objectivity of more than 90%.
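The Apriori step named above can be sketched in a few lines: generate candidate itemsets level by level, keep those meeting the minimum support, and prune candidates with any infrequent subset. The "lesion/representation" codes in the toy records are hypothetical, not real SPECT vocabulary.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: return frequent itemsets (frozensets) with their support."""
    n = len(transactions)
    items = {frozenset([x]) for t in transactions for x in t}
    freq, level = {}, sorted(items, key=sorted)
    while level:
        # count support of each candidate at this level
        counts = {c: sum(c <= t for t in transactions) / n for c in level}
        kept = {c: s for c, s in counts.items() if s >= min_support}
        freq.update(kept)
        # join step: build (k+1)-candidates from surviving k-itemsets
        cands = {a | b for a, b in combinations(list(kept), 2)
                 if len(a | b) == len(a) + 1}
        # prune step: every k-subset of a candidate must itself be frequent
        level = [c for c in cands
                 if all(frozenset(s) in kept for s in combinations(c, len(c) - 1))]
    return freq

# Toy lesion/representation records (hypothetical codes)
records = [frozenset(t) for t in
           [{'lesion:rib', 'hot-spot'}, {'lesion:rib', 'hot-spot', 'focal'},
            {'lesion:spine', 'hot-spot'}, {'lesion:rib', 'focal'}]]
freq = apriori(records, min_support=0.5)
```

Association rules such as "lesion:rib => hot-spot" are then read off the frequent itemsets by computing confidences from these supports.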
Using ARIMA Model to Predict Green Area of Park
YAN Xiang-xiang
Computer Science. 2020, 47 (11A): 531-534.  doi:10.11896/jsjkx.200300099
Abstract PDF(2513KB) ( 2046 )   
References | Related Articles | Metrics
The ARIMA model is one of the common methods for time series analysis and prediction. To predict the green area of parks, and since the advantages of other prediction models are not obvious in this case, the ARIMA model is selected as the prediction method. Data on landscaping and forestry in Beijing from 1978 to 2017 are surveyed and collected. In SPSS, through the steps of data selection, descriptive statistical analysis, autocorrelation-plot stationarity testing, data stationarization and model testing, an ARIMA model suitable for the collected data is finally determined and used to predict the green area of parks. Experimental results such as visualizations and model statistics show that the model fits and predicts well.
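The core workflow above, differencing for stationarity and then forecasting, can be sketched for the simplest case, ARIMA(0,1,0) with drift, on a synthetic series. The data below is illustrative only, not the Beijing survey data.

```python
import numpy as np

# Synthetic "park green area" series with a steady upward trend (illustrative)
years = np.arange(1978, 2018)
rng = np.random.default_rng(2)
area = 100 + 3.5 * (years - 1978) + rng.normal(0, 1.0, years.size)

d1 = np.diff(area)                  # first differencing (the "I" in ARIMA, d = 1)
drift = d1.mean()                   # ARIMA(0,1,0)-with-drift: mean of differences
forecast_next = area[-1] + drift    # one-step-ahead point forecast
```

Adding AR and MA terms on the differenced series `d1` gives the general ARIMA(p,1,q) models that SPSS fits and compares.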
Multi-scale Convolutional Neural Network Air Quality Prediction Model Based on Spatio-Temporal Optimization
ZHOU Jie, LUO Yun-fang, LEI Yao-jian, LI Wen-jing, FENG Yu
Computer Science. 2020, 47 (11A): 535-540.  doi:10.11896/jsjkx.200700164
Abstract PDF(2319KB) ( 1319 )   
References | Related Articles | Metrics
At present, air quality prediction is mainly based on the time series of a single station, without considering the influence of spatial characteristics on air quality. To solve this problem, a multi-scale convolutional neural network model based on spatio-temporal optimization (MSCNN-GALSTM) is proposed for air quality prediction. One-dimensional multi-scale convolution kernels (MSCNN) are used to extract local temporal and spatial feature relations from air quality data, which are linearly spliced and fused to obtain the spatio-temporal feature relations of multiple sites and multiple features. The model combines the advantage of the long short-term memory network (LSTM) in processing time series and introduces a genetic algorithm (GA) to globally optimize the LSTM's parameter set; the multi-site, multi-feature spatio-temporal relations are input into the LSTM network, and the long-term feature dependencies of multiple sites and features are output. Finally, the MSCNN-GALSTM model is compared with a single LSTM baseline and a single-scale convolutional neural network model: the root mean square error (RMSE) decreases by about 11% and the average prediction accuracy increases by about 20%. The results show that the MSCNN-GALSTM model offers more comprehensive feature extraction, deeper levels, higher prediction accuracy and better generalization ability.
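The multi-scale convolution front end can be sketched as convolving the same series with filters of several widths and concatenating the feature maps. The kernel sizes, filter count and toy series below are hypothetical choices for illustration.

```python
import numpy as np

def multi_scale_conv1d(x, kernel_sizes=(3, 5, 7), n_filters=4, seed=0):
    """Multi-scale 1-D convolution sketch: filters of several widths applied
    to one series, feature maps concatenated (same-length padding)."""
    rng = np.random.default_rng(seed)
    maps = []
    for k in kernel_sizes:
        kernels = rng.normal(size=(n_filters, k))      # random stand-in weights
        xp = np.pad(x, (k // 2, k // 2))               # keep output length == len(x)
        maps.append(np.stack([np.convolve(xp, w, mode='valid') for w in kernels]))
    return np.concatenate(maps)   # (len(kernel_sizes) * n_filters, len(x))

hourly_pm25 = np.sin(np.linspace(0, 6, 48))            # toy 48-hour series
feats = multi_scale_conv1d(hourly_pm25)
```

In MSCNN-GALSTM, feature maps like `feats` (stacked across stations) would be spliced and fed to the GA-tuned LSTM.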
Software Engineering
Analysis of Impact of Open Source Components in Mixed Source Software Projects
ZHAO Liang
Computer Science. 2020, 47 (11A): 541-543.  doi:10.11896/jsjkx.200400077
Abstract PDF(1699KB) ( 1152 )   
References | Related Articles | Metrics
This paper studies the code structure characteristics of mixed-source software according to four criteria, namely function knowledge, code usefulness, code security and intellectual property rights, which reveal the unique code space of mixed-source code. It analyzes the positive and negative effects of open source components on the progress, quality, cost and intellectual property of mixed-source projects, and divides licenses into three types according to their infectivity. Through a case study of open source components in a safety-critical software project, the basic situation of open source application is shown and the problems existing in practice are analyzed. Based on the above research, this paper proposes a whole-life-cycle management mechanism for open source components, argues that innovation based on open source should be strengthened, and encourages integration with and feedback to the open source community. From a technical view, careful component selection, strengthened product development process management in the early stage of a project, and close tracking of the open source community during product evolution can help to make better use of open source components and promote the management of mixed-source software projects.
API Recommendation Model with Fusion Domain Knowledge
LI Hao, ZHONG Sheng, KANG Yan, LI Tao, ZHANG Ya-chuan, BU Rong-jing
Computer Science. 2020, 47 (11A): 544-548.  doi:10.11896/jsjkx.191200010
Abstract PDF(1755KB) ( 1193 )   
References | Related Articles | Metrics
Application programming interfaces (APIs) play an important role in modern software development, and developers often need to search for the appropriate API for their programming tasks. However, with the development of the information industry, API reference documents have become larger and larger, and redundant and erroneous information on the Internet makes traditional search methods inconvenient for engineers' queries. At the same time, the vocabulary and knowledge gap between the natural language description of a programming task and the description in API documentation makes it difficult to find a suitable API. To address these issues, this paper proposes ARDSQ (API Recommendation based on Documentation and Solved Questions), an API recommendation algorithm that integrates domain knowledge. ARDSQ retrieves the closest API in a knowledge base from the natural language description given by the engineer. Experiments show that, compared with two advanced API recommendation algorithms (BIKER and DeepAPILearning), ARDSQ has clear advantages on the key evaluation metrics of recommendation systems (Hit-n, MRR and MAP).
Study on Reverse Engineering Generation Method of Software Evolution History
ZHONG Lin-hui, FU Li-juan, YE Hai-tao, QI Jie, XU Jing
Computer Science. 2020, 47 (11A): 549-556.  doi:10.11896/jsjkx.200200067
Abstract PDF(3479KB) ( 768 )   
References | Related Articles | Metrics
For better management of software evolution, more and more software evolution management models have been proposed. However, most models store software in units of files or projects and lack the evolution history of software components, which makes the evolution process hard to understand and manage intuitively and effectively. In this paper, the software evolution binary tree is defined to express the evolution history of software and its components, and a method to recover the evolution binary tree of software and its components by software architecture reverse engineering is proposed. First of all, the (atomic) components of a software system and its software architecture (taken as a special composite component here) are recovered from source code by architecture reverse engineering, and the multi-dimensional attributes of the corresponding atomic and composite components are measured, on the basis of which the software evolution histories are constructed by the evolution binary tree construction algorithm. Then, after analyzing the main factors that affect the construction of evolution binary trees in two groups of experiments, evolution binary trees are generated according to similarity thresholds with different attributes and according to combinations of different attributes, using the architecture reverse tools Bunch and ACDC respectively. Through experiments on eight open source software projects (Cassandra, HBase, Hive, OpenJPA, ZooKeeper, RxJava, Groovy, Sqoop), the best similarity thresholds and the best attribute combinations for constructing evolution binary trees are identified. The recovered evolution binary trees of composite components are extremely similar in structure to their corresponding real trees, and recovery with the architecture reverse tool ACDC has higher accuracy. Consequently, the proposed method is effective in recovering the evolution histories of these open source software systems and their components.
Hierarchical Classification Model for Metamorphic Relations of Scientific Computing Programs
YANG Xiao-hua, YAN Shi-yu, LIU Jie, LI Meng
Computer Science. 2020, 47 (11A): 557-561.  doi:10.11896/jsjkx.200200015
Abstract PDF(2621KB) ( 684 )   
References | Related Articles | Metrics
Metamorphic testing is an effective way to solve the test oracle problem, and its key is the discovery of metamorphic relations. By analyzing the research and development process of scientific computing programs, this paper puts forward the concepts of physical-model, computing-model and code-model metamorphic relations, defines the hierarchical structure of the three kinds of metamorphic relations, establishes a hierarchical classification model for metamorphic relations, and discusses its application prospects in research on methods for discovering metamorphic relations.
SQL Grammar Structure Construction Based on Relationship Classification and Correction
WAN Wen-jun, DOU Quan-sheng, CUI Pan-pan, ZHANG Bin, TANG Huan-ling
Computer Science. 2020, 47 (11A): 562-569.  doi:10.11896/jsjkx.200200086
Abstract PDF(2293KB) ( 708 )   
References | Related Articles | Metrics
Aiming at the problem that the SQL grammar structure of nested queries is difficult to construct, the GSC-RCC method, which combines relation classification and correction, is proposed; it represents SQL grammar by three types of entity relationships. Firstly, a deep relation classification model is designed, and common column-name words are introduced to improve its performance, so as to determine the probability of each relation for every entity pair in a statement and generate an uncorrected undirected graph. Then a relation correction algorithm based on SQL grammar is designed to correct the undirected graph and finally construct the SQL grammar structure. In a real estate data query task, for multi-condition query statements with nested conditions, the grammar structure generation accuracy of the GSC-RCC method is 92.25%, and the method reduces the model's dependence on the number of statement samples.
Study on Mapping Transformation from Geometric Aviation Data to Relational Database
LAI Xin, ZENG Ji-wei
Computer Science. 2020, 47 (11A): 570-572.  doi:10.11896/jsjkx.200400040
Abstract PDF(2063KB) ( 704 )   
References | Related Articles | Metrics
The GML-based aeronautical data exchange model is the foundation of future aeronautical information management, while the aeronautical information service systems currently in use are mainly based on relational databases. In the transitional stage, the expression of GML-standard geometric aeronautical information data in relational databases must be considered. The structural characteristics of geometric aeronautical data in the aeronautical information exchange model and the differences among geometric data representations in relational databases are analyzed, and the feasibility of mapping between the two types of data is demonstrated. A mapping scheme from geometric aviation data to a relational database is then proposed, and available technologies such as LINQ to XML are used to verify its feasibility.
Interdiscipline & Application
Construction of Mathematics Course Knowledge Graph and Its Reasoning
ZHANG Chun-xia, PENG Cheng, LUO Mei-qiu, NIU Zhen-dong
Computer Science. 2020, 47 (11A): 573-578.  doi:10.11896/jsjkx.191200141
Abstract PDF(2138KB) ( 2495 )   
References | Related Articles | Metrics
The construction of course knowledge graphs has become an important research topic in the fields of knowledge graphs, e-learning and knowledge services. This paper takes mathematics courses as the research object, constructs a mathematics course ontology (MCO), designs a method of building a mathematics course knowledge graph (MCKG) in terms of the ontology, and proposes an approach to knowledge reasoning founded on the MCKG. The MCO is characterized by comprising a mathematics course top-level ontology, a mathematics course content ontology and a mathematics course exercise ontology: the top-level ontology depicts the shared conceptual knowledge of different mathematics courses, the content ontology describes the knowledge of specific courses, and the exercise ontology depicts the intensions and properties of mathematics course exercises. The MCKG is characterized by the hierarchical fusion of a basic model and an extended model, the introduction of positive and negative instances of concepts, and organic integration with the mathematics course content ontology. Knowledge inference based on the MCKG is characterized by a taxonomy of inference types, which gives the types of inferred knowledge and their locations and associated relationships in the MCKG from the point of view of ontology. Experiments on a discrete mathematics course show the validity of the proposed knowledge graph construction and reasoning methods. The mathematics course knowledge graph and its reasoning provide users with a formal explicit model for representing, organizing and reasoning about course knowledge, and can improve knowledge services.
Design and Implementation of Neurofeedback Intervention System Based on Unity
HE Yan, ZHANG Chen-yang
Computer Science. 2020, 47 (11A): 579-583.  doi:10.11896/jsjkx.200700153
Abstract PDF(2123KB) ( 1192 )   
References | Related Articles | Metrics
The plasticity of the brain underlies its ability to recover cognition after injury, and neurofeedback training can make this recovery process more efficient. Because of the side effects of medication, including drug resistance, neurofeedback-based brain rehabilitation training has recently attracted much attention in brain research. As an effective treatment for clinical brain disorders such as autism and attention deficit hyperactivity disorder, brain regulation and modulation can gradually restore cognitive function through advanced cognitive task training, of which there are many kinds. In this paper, the Unity3D game engine is used to design and develop a neurofeedback therapy system called Speeding. Firstly, the system introduces a neural feedback mechanism beyond traditional thinking training: the difficulty of training is adjusted in time according to the detected condition of the brain, which enhances the purpose and pertinence of brain training and thus improves training efficiency. Secondly, the system adopts map and gem props to carry out multi-task thinking training, which makes the player pay closer attention and improves the training effect. By designing multiple scenarios corresponding to graded training difficulties, the system can improve the working condition of the brain and provide technical support for the rehabilitation and treatment of the cerebral nervous system.
Method for Transforming Directed Acyclic Graph into Algebraic Expression Tree
LI Hong-yu, WANG Yu-xin
Computer Science. 2020, 47 (11A): 584-590.  doi:10.11896/jsjkx.200200066
Abstract PDF(1792KB) ( 1241 )   
References | Related Articles | Metrics
This paper presents a method for transforming a directed acyclic graph into an algebraic expression tree. The method can perform series merging, parallel merging and serialization merging of graphs, and it can handle functional vertices. Compared with previous transformation methods, the transformation given in this paper can deal with more types of graphs and vertices, so it is more widely applicable. The paper gives the conversion method and analyzes its running time; in practical applications the conversion time depends only on the number of edges in the graph, so the conversion is highly time-efficient.
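The simplest of the three reductions, series merging, can be sketched directly: any vertex with exactly one incoming and one outgoing edge is bypassed. This is a generic illustration of the operation, not the paper's full algorithm (which also handles parallel and serialization merging and functional vertices).

```python
def series_merge(edges):
    """Series reduction sketch: while some vertex v has exactly one incoming
    edge (u,v) and one outgoing edge (v,w), replace both with (u,w)."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        verts = {x for e in edges for x in e}
        for v in verts:
            ins = [e for e in edges if e[1] == v]
            outs = [e for e in edges if e[0] == v]
            if len(ins) == 1 and len(outs) == 1:
                (u, _), (_, w) = ins[0], outs[0]
                edges -= {ins[0], outs[0]}
                edges.add((u, w))               # bypass v with a single edge
                changed = True
                break
    return edges

# The chain a -> b -> c -> d collapses to the single edge a -> d
reduced = series_merge([('a', 'b'), ('b', 'c'), ('c', 'd')])
```

In the expression-tree setting, each merge would also combine the labels of the two removed edges into one subexpression.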
Determination of Evidence Weight Coefficient and D-S Combination Research Based on Global Conflict Coefficient
XU Jiang-jun, PENG Xu, LYU Wei, LIU Xiao-han
Computer Science. 2020, 47 (11A): 591-592.  doi:10.11896/jsjkx.200500042
Abstract PDF(1484KB) ( 665 )   
References | Related Articles | Metrics
To modify evidence sources reasonably and deal with the problems that arise when the D-S combination rule is used to combine highly conflicting evidence, this paper proposes a new method for determining evidence weights. Firstly, a global conflict coefficient for each piece of evidence is calculated from the local conflict and the similarity between evidences. Secondly, the inverse of the global conflict coefficient is taken as the evidence weight, and the weight is used to reallocate the probability masses of the original evidence. Finally, the modified evidence is combined. The results show that the method is superior to other methods.
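For reference, the underlying D-S combination of two mass functions can be sketched as follows; K is the pairwise conflict coefficient that the paper generalizes into a global one. The frame and mass values below are toy examples.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame.
    K is the conflict mass; surviving products are renormalized by 1 - K."""
    K = sum(p1 * p2 for (A, p1), (B, p2) in product(m1.items(), m2.items())
            if not (A & B))                    # mass assigned to the empty set
    fused = {}
    for (A, p1), (B, p2) in product(m1.items(), m2.items()):
        if A & B:
            fused[A & B] = fused.get(A & B, 0.0) + p1 * p2 / (1 - K)
    return K, fused

# Toy frame {A, B} with two moderately conflicting sources
m1 = {frozenset('A'): 0.6, frozenset('B'): 0.4}
m2 = {frozenset('A'): 0.7, frozenset('B'): 0.3}
K, fused = dempster_combine(m1, m2)
```

In the paper's scheme, 1/K-style weights would first discount each source's masses before this combination is applied, so that highly conflicting evidence contributes less.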
Study on Simulation Optimization of Gazebo Based on Asynchronous Mechanism
ZENG Lei, LI Hao, LIN Yu-fei, ZHANG Shuai
Computer Science. 2020, 47 (11A): 593-598.  doi:10.11896/jsjkx.200300131
Abstract PDF(2519KB) ( 1423 )   
References | Related Articles | Metrics
In large-scale robot simulation, a time-step-based propulsion mechanism is usually adopted to guarantee simulation accuracy, since the accuracy can be flexibly controlled by adjusting the simulation time step. However, when the simulation scale is large, a large number of plug-in codes for updating pose or state must be executed in a synchronous blocking way in each iteration of the simulation loop, which reduces simulation performance. To resolve this contradiction between accuracy and performance in large-scale robot simulation, an optimization scheme based on an asynchronous strategy is proposed, and it is designed and implemented in the popular robot simulator Gazebo. Finally, the validity of the scheme is verified on the case of ROSflight fixed-wing UAVs. The experimental results show that the speedup of the simulation exceeds 5.0 when the asynchronous strategy is used to optimize a simulation of 100 fixed-wing UAVs.
Designs and Implementations of Algorithms for Searching Terminal Words of Automata
SUN Shi-yuan, HE Yong
Computer Science. 2020, 47 (11A): 599-603.  doi:10.11896/jsjkx.200300096
Abstract PDF(1971KB) ( 666 )   
References | Related Articles | Metrics
The ranks of automata are closely related to the design of parts orienters in industrial automation and to the Černý-Pin conjecture in theoretical computer science. Computing the rank of an automaton can be reduced to computing its terminal words. Rystsov proposed an algorithm for searching terminal words of automata with O(|A|^4) time complexity, which remains the only such algorithm to date. New algorithms for searching terminal words of automata can be designed based on existing algorithms for searching synchronizing words of synchronizing automata. Theoretical analysis and experimental results show that all of these new algorithms improve on Rystsov's algorithm.
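For a synchronizing automaton, the terminal word coincides with a synchronizing word, and the classic greedy (Eppstein-style) search repeatedly merges a pair of states via BFS on the pair automaton. The sketch below illustrates that family of algorithms on the Černý automaton C_4; it is not one of the paper's specific algorithms.

```python
from collections import deque

def apply_word(s, w, delta):
    """Run state s through the letters of word w."""
    for c in w:
        s = delta[s][c]
    return s

def greedy_sync_word(states, alphabet, delta):
    """Greedy search for a synchronizing word: repeatedly find a shortest word
    merging some pair of current states (sketch; result need not be shortest)."""
    current = set(states)
    word = ''
    while len(current) > 1:
        p, q = sorted(current)[:2]
        seen, queue = {(p, q)}, deque([((p, q), '')])
        while queue:                          # BFS over unordered state pairs
            (a, b), w = queue.popleft()
            if a == b:                        # pair merged: apply w to all states
                word += w
                current = {apply_word(s, w, delta) for s in current}
                break
            for c in alphabet:
                nxt = (delta[a][c], delta[b][c])
                key = tuple(sorted(nxt))
                if key not in seen:
                    seen.add(key)
                    queue.append((nxt, w + c))
    return word

# Černý automaton C_4: 'a' cycles the states, 'b' maps 3 -> 0 and fixes the rest
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 1},
         2: {'a': 3, 'b': 2}, 3: {'a': 0, 'b': 0}}
w = greedy_sync_word([0, 1, 2, 3], 'ab', delta)
```

For automata of rank r > 1 the same pair-merging loop stops once no further pair can be merged, and the accumulated word is a terminal word.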
Study on New Model of Food Supply Chain Finance Based on Internet of Things+Blockchain
JIN Hui-fang, LYU Zong-wang, ZHEN Tong
Computer Science. 2020, 47 (11A): 604-608.  doi:10.11896/jsjkx.200300140
Abstract PDF(2192KB) ( 1045 )   
References | Related Articles | Metrics
China is a large agricultural country, and traditional agriculture has long dominated its agricultural economy. Grain finance is related not only to the development of the rural economy but also, to a great extent, to China's food security. Traditional grain finance has many problems that largely limit the development and reform of China's agriculture. With the development of information technology, applications such as intelligent storage, intelligent warehousing and intelligent logistics have made great progress in grain informatization, but they have not been fully utilized. This paper combines blockchain technology, Internet of Things technology and related financial means to design a new model of food supply chain finance. The platform can effectively supervise the business activities of enterprises and solve the problems of insufficient credit and low risk resistance among grain farmers and grain storage enterprises. Thanks to the introduction of blockchain technology, the whole process is open and transparent with shared data, which also ensures information security, providing a new approach to the food supply chain finance model.
Data Acquisition System of Industrial Equipment Based on OPC UA
YU Xin-yi, YIN Hui-wu, SHI Tian-feng, TANG Quan-rui, BAI Ji-hua, OU Lin-lin
Computer Science. 2020, 47 (11A): 609-614.  doi:10.11896/jsjkx.200500060
Abstract PDF(2927KB) ( 1433 )   
References | Related Articles | Metrics
In order to solve the problems of data collection and unified monitoring caused by the variety of industrial equipment protocols, a data collection system based on OPC UA is studied, taking industrial equipment such as PLCs, industrial robots and CNC machine tools as the research objects. Connections between the industrial equipment and a local monitoring server are established through industrial Ethernet. On the local monitoring server, different data acquisition drivers and data conversion plug-ins are designed and uniformly managed according to the different communication protocols. Based on the OPC UA SDK and XML files generated by the configuration interface, the OPC UA address space is constructed to build the OPC UA server, which stores the converted data and interacts with OPC UA clients. Meanwhile, the collected and converted data is uploaded to a cloud storage system for further data analysis. The system is developed on the .NET platform: the whole local monitoring server is built in C# on the .NET Framework, WPF is used to design the monitoring and configuration interface, and the cloud combines Redis and MySQL to store operating data. Finally, the feasibility and integrity of the system are verified through experiments.
Face Image Deduplication Based on Fusion of Face Tracking and Clustering
LIN Zeng-min, HONG Chao-qun, ZHUANG Wei-wei
Computer Science. 2020, 47 (11A): 615-619.  doi:10.11896/jsjkx.200400142
Abstract PDF(2765KB) ( 1026 )   
References | Related Articles | Metrics
Face image deduplication is of great significance to face recognition in intelligent surveillance systems, since face detection in videos produces a large number of repeated face images. In this paper, a method of face image deduplication in videos that integrates face tracking and clustering is proposed. In a video, the face detection algorithm of the multi-task convolutional neural network (MTCNN) is used to extract face frames and their corresponding coordinates. Face tracking is used to construct face trajectories and a constraint matrix, and a face quality evaluation algorithm that considers face pose and image clarity is introduced to select an optimal face image from each trajectory as its representative. Combined with the constraint matrix, an unsupervised clustering algorithm groups the representative images so that the face images of the same person are clustered together. Finally, the face images of each person are evaluated again to obtain the deduplicated result. Experimental results show that, through face tracking and unsupervised clustering, the method can quickly and efficiently obtain high-quality, non-repeated face images of each person in a video.
Study on Key Technologies of Interconnection Based on Smart Home Operating System UHomeOS
CHEN Guo-huo, YIN De-shuai, XU Jing, QIAN Xue-wen, WANG Miao
Computer Science. 2020, 47 (11A): 620-623.  doi:10.11896/jsjkx.200300149
Abstract PDF(1947KB) ( 867 )   
References | Related Articles | Metrics
The rapid development of the smart home industry requires an operating system that can achieve cross-brand smart terminal interconnection.At present,many well-known manufacturers have launched their own smart home operating systems,with the intention of establishing a good application ecology.Haier UHomeOS is the first secure IoT operating system for smart appliances in the world,covering a complete set of smart home solutions.Based on Haier UHomeOS,this paper explores three key technologies for achieving interconnection,researches each of them further,and provides solutions.The three key technologies are:device network provisioning technology,device modeling technology and intelligent control technology.In this paper,“multicast MAC technology+SoftAP technology” is used for device network provisioning,and a unified device model is built.In the device model,constraints are set for operations involving logical relations,and overall interconnection is realized.This paper also implements intelligent control of home appliances through scenario examples,thereby verifying the reliability of UHomeOS-based intelligent scenario interconnection.
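The idea of attaching logical constraints to operations in the device model can be illustrated with a minimal sketch. The attributes, the safety rule, and the function names below are invented for illustration and are not UHomeOS APIs.

```python
# Hypothetical device model: each attribute has an allowed value set, plus
# logical constraints between operations (here: the appliance must be powered
# off before the door may be unlocked).
MODEL = {
    "attributes": {"power": {"on", "off"}, "door": {"locked", "unlocked"}},
    "constraints": [lambda s: not (s["power"] == "on" and s["door"] == "unlocked")],
}

def apply_command(state, attr, value, model=MODEL):
    """Apply a control command only if the resulting state satisfies every
    constraint in the device model; otherwise reject it and keep the state."""
    if value not in model["attributes"].get(attr, set()):
        return state, False                       # unknown attribute or value
    candidate = dict(state, **{attr: value})
    if all(rule(candidate) for rule in model["constraints"]):
        return candidate, True
    return state, False                           # constraint violated

# Usage: unlocking while powered on is rejected; power off first, then unlock.
state0 = {"power": "on", "door": "locked"}
rejected, ok_unlock = apply_command(state0, "door", "unlocked")
state1, _ = apply_command(state0, "power", "off")
state2, ok2 = apply_command(state1, "door", "unlocked")
```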
Fast Calculation Method of Aircraft Component Strength Check Based on ICCG
XU Xin-peng, HU Bin-xing
Computer Science. 2020, 47 (11A): 624-627.  doi:10.11896/jsjkx.191100154
Abstract PDF(2319KB) ( 720 )   
References | Related Articles | Metrics
With the requirement of fast diagnosis for reusable aircraft structures,the GPU is used as a coprocessor to solve sparse linear equations,exploiting its high parallelism and high memory bandwidth.For the most time-consuming step,the solution of the linear equations,the incomplete Cholesky conjugate gradient (ICCG) method is used,and computing efficiency is verified using a wing as an example.A GTX 1060 graphics card achieves a speedup of about 25 times over an E3 1230 V5 CPU.The results show that the CUDA-based ICCG algorithm can satisfy the diagnostic calculation of aircraft finite element models of order less than 60 000.
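The structure of a preconditioned conjugate gradient solver can be sketched as below. For brevity this sketch uses a dense matrix and a Jacobi (diagonal) preconditioner standing in for the incomplete Cholesky factorization of the paper's ICCG; the iteration skeleton is the same, only the preconditioner application differs.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD system A x = b
    (A as dense nested lists). M^{-1} is the Jacobi preconditioner here;
    ICCG would apply triangular solves with the incomplete Cholesky factor."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                   # r = b - A x with x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]   # diagonal preconditioner
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

x = pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On the GPU, the matrix-vector product, dot products, and vector updates in this loop are exactly the operations that parallelize well, which is the source of the speedup reported above.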
Application of SFRA Method in AC Servo System
LI Xing-guo, REN Yi-mei, TIAN Jing, TANG Jing-qi
Computer Science. 2020, 47 (11A): 628-631.  doi:10.11896/jsjkx.190600163
Abstract PDF(3152KB) ( 890 )   
References | Related Articles | Metrics
In order to improve the development efficiency of digital power supplies,TI has developed a software frequency response analyzer (SFRA) tool for its C2000 series processors to test the frequency characteristics of digital power supplies.Based on an analysis of the SFRA principle and the application requirements of AC servo systems,in this paper SFRA is applied to the frequency characteristic test of an AC servo system.Experimental results show that the SFRA method can also be used in AC servo systems.On the one hand,it reduces system cost;on the other hand,it expands the application scenarios of the method and has positive research value.
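The principle behind a software frequency response analyzer can be sketched numerically: inject a sine at each test frequency and correlate the loop output with sine/cosine references (a single-bin DFT) to recover gain and phase. The first-order plant and all parameters below are illustrative stand-ins, not TI's SFRA implementation.

```python
import math

def measure_frequency_response(plant, freq, fs, cycles=50):
    """Inject a unit sine at `freq` (Hz) into `plant` (one sample in, one
    sample out) and correlate the output with sin/cos references to estimate
    gain and phase (degrees), as an SFRA-style sweep does at each frequency."""
    n = int(cycles * fs / freq)          # integer number of cycles: no leakage
    w = 2 * math.pi * freq / fs
    i_sum = q_sum = 0.0
    for k in range(n):
        y = plant(math.sin(w * k))
        i_sum += y * math.sin(w * k)
        q_sum += y * math.cos(w * k)
    i_sum *= 2.0 / n
    q_sum *= 2.0 / n
    return math.hypot(i_sum, q_sum), math.degrees(math.atan2(q_sum, i_sum))

def make_lowpass(a):
    """Stand-in plant: first-order IIR low-pass y[n] = a*y[n-1] + (1-a)*u[n]."""
    state = [0.0]
    def plant(u):
        state[0] = a * state[0] + (1 - a) * u
        return state[0]
    return plant

gain, phase = measure_frequency_response(make_lowpass(0.9), freq=10, fs=1000)
```

Repeating this over a grid of frequencies yields the Bode plot from which loop bandwidth and phase margin are read off.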
Optimization of Scheduling and Maintenance Strategy for Navigation Aircraft Operation
CHEN Yu-tao, XU Wen-chao, ZHAO Zhao-na, LIU Hong-en, WANG Hao
Computer Science. 2020, 47 (11A): 632-637.  doi:10.11896/jsjkx.200600053
Abstract PDF(2209KB) ( 761 )   
References | Related Articles | Metrics
The power grid maintenance operations of navigation aircraft companies are characterized by diverse task types,scattered operation locations and uncertain disturbances,so crews and maintenance personnel with high-quality maintenance and repair capabilities need to be integrated in the actual operation of navigation aircraft.Considering control objectives such as operational performance under safety priority,the characteristics of navigation aircraft operation and maintenance planning are analyzed.Combining practical experience in navigation aviation operation control and scheduling with the constraints of the operation process,fairness and uniformity strategies compatible with the operation process and safety standards are proposed.A model for general aircraft operation and maintenance scheduling tasks is established,and an optimization algorithm adapted to the navigation aircraft maintenance schedule is designed based on the tabu search algorithm,constructing neighborhood move rules for aircraft and task sets.Simulation on actual data shows that,after strategy optimization,the fairness and uniformity of the allocation results increase by 71.02% and 19.07% respectively compared with actual schedules.
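The tabu search skeleton used above can be sketched on a toy version of the problem: assign task durations to crews so that the load spread (a simple stand-in for the uniformity objective; the paper's model also covers fairness and safety constraints) is minimized, with "reassign one task to another crew" as the neighborhood move.

```python
from collections import deque

def tabu_assign(tasks, n_crews, iters=50, tenure=5):
    """Tabu search sketch: objective is max crew load minus min crew load;
    a short-term tabu list forbids immediately undoing a move."""
    def spread(a):
        loads = [0.0] * n_crews
        for t, c in zip(tasks, a):
            loads[c] += t
        return max(loads) - min(loads)

    assign = [i % n_crews for i in range(len(tasks))]   # round-robin start
    best, best_cost = assign[:], spread(assign)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = []
        for i in range(len(tasks)):
            for c in range(n_crews):
                if c != assign[i] and (i, c) not in tabu:
                    nb = assign[:]
                    nb[i] = c
                    candidates.append((spread(nb), i, c, nb))
        if not candidates:
            break                          # every move is currently tabu
        cost, i, c, nb = min(candidates)   # best admissible neighbor
        tabu.append((i, assign[i]))        # forbid moving task i straight back
        assign = nb
        if cost < best_cost:
            best, best_cost = nb[:], cost
    return best, best_cost

best, cost = tabu_assign([6.0, 2.0, 2.0, 2.0], n_crews=2)
```

The real algorithm would replace `spread` with the paper's fairness/uniformity objective and restrict moves to those satisfying the safety and process constraints.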
PIFA-based Evaluation Platform for Speech Recognition System
CUI Yang, LIU Chang-hong
Computer Science. 2020, 47 (11A): 638-641.  doi:10.11896/jsjkx.200500097
Abstract PDF(2227KB) ( 682 )   
References | Related Articles | Metrics
There are many application fields of speech recognition technology,and performance evaluation of speech recognition systems plays an important role in promoting the development of the technology.A PIFA (Performance Influencing Factor Analysis) based architecture of an evaluation platform for speech recognition systems is proposed by summarizing various existing speech recognition evaluation methods,so as to better compare the performance of different speech systems,and a platform with PIFA is implemented.The platform involves two key concepts,the evaluation database and the evaluation project,and includes modules for evaluation data generation,data analysis,performance evaluation index calculation and performance influencing factor analysis.It can deal with multiple recognition tasks and many kinds of data,especially large-vocabulary continuous speech recognition.The evaluation results can be statistically analyzed by the platform to reveal the influence of various data attributes on the performance of the recognition system,and to help improve the speech recognition system.
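The standard performance index such a platform computes is the word error rate (WER), i.e. (substitutions + deletions + insertions) / reference length via word-level edit distance. A minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate between a reference and a hypothesis transcript,
    computed with a Levenshtein dynamic program over words."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1   # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + cost)     # match / substitution
    return d[len(r)][len(h)] / len(r)

score = wer("the cat sat on the mat", "the cat sat on mat")
```

A PIFA-style analysis would then group such scores by data attributes (speaker, noise level, vocabulary, etc.) to reveal which factors drive the errors.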
Monitoring System of Traffic Safety Based on Information Fusion Technology
SUN Zhi-gang, WANG Guo-tao, JIANG Ai-ping, GAO Meng-meng, LIU Jin-gang
Computer Science. 2020, 47 (11A): 642-650.  doi:10.11896/jsjkx.200400133
Abstract PDF(4120KB) ( 825 )   
References | Related Articles | Metrics
Aiming at the problems of existing driving systems,such as the lack of real-time monitoring and early warning of vehicle conditions,the lack of judgment,alarm and guidance for drivers' drunk driving and fatigue driving behavior,and the inability to know drivers' driving conditions remotely,a monitoring system of traffic safety based on information fusion technology is designed.The system consists of a vehicle monitoring terminal,a remote monitoring and management platform and an Android mobile terminal,and the vehicle monitoring terminal is divided into a data processing part and a prompt part.The data processing part uses an STM32 as the microcontroller to collect and send vehicle condition parameters and to realize local early warning judgment,remote alarm,etc.The prompt part uses a PC tablet based on the Android system as the development carrier to realize early warning voice prompts,early warning threshold setting and intelligent guidance services.The remote monitoring and management platform is developed in C# to realize remote data processing,early warning threshold setting and map positioning,while the mobile terminal,developed in Java on the Android system,realizes map positioning,path planning and intelligent guidance services.The test results show that the designed monitoring system runs stably and transmits data reliably,effectively compensating for the shortcomings of existing driving systems,and has high application value.
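The local early-warning judgment that fuses the different evidence sources can be sketched as a weighted decision rule. The thresholds, weights, and decision levels below are illustrative assumptions, not the paper's calibrated values.

```python
def fuse_warning(alcohol_mg_l, perclos, speed_kmh, speed_limit):
    """Toy information-fusion decision: combine drunk-driving evidence
    (breath alcohol), fatigue evidence (PERCLOS eye-closure ratio), and
    vehicle-condition evidence (speeding) into one warning level."""
    score = 0.0
    if alcohol_mg_l >= 0.20:        # drunk-driving threshold (illustrative)
        score += 0.5
    if perclos >= 0.40:             # fatigue threshold (illustrative)
        score += 0.3
    if speed_kmh > speed_limit:     # vehicle-condition evidence
        score += 0.2
    if score >= 0.5:
        return "alarm"              # trigger local alarm + remote report
    if score > 0.0:
        return "warning"            # voice prompt only
    return "normal"
```

In the actual system the STM32 terminal would evaluate such a rule on each sensor cycle and forward "alarm" states to the remote platform.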
Fast Design and Verification of Flight Control Law for Small Compound UAV
TAN Si-yang
Computer Science. 2020, 47 (11A): 651-656.  doi:10.11896/jsjkx.200100026
Abstract PDF(3157KB) ( 1664 )   
References | Related Articles | Metrics
With the in-depth study of VTOL aircraft,the design of their flight control laws has gradually become a research focus.The rapid design and verification method for the flight control law of a small compound UAV is studied.First,a small compound VTOL aircraft layout is proposed.Then,the mission profile is set up and the flight control law is designed based on it.The flight control law model is built on the MATLAB/Simulink platform,while the aircraft body model is built on the Amesim platform.The flight control law algorithm is validated by modeling and simulation,and fast iterative optimization of the flight control law design scheme is realized by analyzing flight performance simulation results.Meanwhile,the fast design and verification solution can serve as a reference for the joint simulation and analysis of control systems and controlled objects in other aerospace fields.
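The elementary building block of such a Simulink control-law model is a discrete PID loop closed around the airframe model. The sketch below shows that structure with invented gains and a pure-integrator stand-in plant; it is not the paper's actual control law.

```python
class PID:
    """Discrete PID controller, the basic block of a flight control law model."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Close the loop around a pure-integrator plant (e.g. rate command to angle),
# tracking a unit attitude step; gains are illustrative.
dt = 0.01
pid = PID(kp=2.0, ki=0.0, kd=0.5, dt=dt)
y = 0.0
for _ in range(1000):            # 10 s of simulated flight
    y += pid.step(1.0 - y) * dt  # plant: y' = u
```

In the paper's workflow the plant line would be replaced by the Amesim airframe model, and the simulated step responses would drive the iterative tuning of the gains.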
Online Judge System Based on Multiple Judgement Modes
WANG Gui-ping, LIU Jun, LUO Xian, CHEN Wang-qiao
Computer Science. 2020, 47 (11A): 657-661.  doi:10.11896/jsjkx.200500048
Abstract PDF(1862KB) ( 1026 )   
References | Related Articles | Metrics
Software development abilities,including programming,are basic abilities for students of electronic and information majors.Online programming practice and programming contests can cultivate students' learning interest and improve their programming practice abilities.Online Judge (OJ) systems play important roles in the practical teaching of programming courses,as well as in the promotion of programming contests.In recent years,various programming contests have been increasingly promoted in Chinese universities,and their extent and influence are growing.There is an urgent requirement for an OJ system that can adapt to different judgement modes.This paper analyzes the present situation of OJ systems and summarizes three judgement modes,i.e.,single data set,multiple data sets,and weighted multiple data sets.It designs and develops an OJ system based on multiple judgement modes (MOJ),and implements these three judgement modes in MOJ.MOJ can be adapted to different programming contests,as well as to the practical teaching of programming courses,and plays important roles in teaching and contests.
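The three judgement modes reduce to one scoring rule once weights are introduced: a single data set is one case, multiple data sets are equal weights, and the weighted mode assigns each case its own weight. A sketch (the contestant program is modeled as a callable; a real OJ would execute it in a sandbox and compare output text):

```python
def judge(run, cases, weights=None):
    """Score a submission over test cases.
    - single data set: one case, score is 0 or 100
    - multiple data sets: equal weights (the default)
    - weighted multiple data sets: per-case weights"""
    if weights is None:
        weights = [1.0] * len(cases)          # plain multiple-data-set mode
    total = sum(weights)
    earned = sum(w for (inp, expected), w in zip(cases, weights)
                 if run(inp) == expected)     # full weight per passed case
    return 100.0 * earned / total

# Illustrative submission: doubles its input, so the third case fails.
double = lambda x: x * 2
cases = [(1, 2), (2, 4), (3, 7)]              # (input, expected); last is wrong
score_plain = judge(double, cases)            # multiple data sets
score_weighted = judge(double, cases, [1, 1, 2])  # weighted mode
```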
Architecture Design of Multiple Spacecraft Comprehensive Assessment System Based on Middle Platform
LIU Fan, WANG Li, LIU Kai, HUANG Xiao-feng
Computer Science. 2020, 47 (11A): 662-666.  doi:10.11896/jsjkx.200400084
Abstract PDF(2457KB) ( 850 )   
References | Related Articles | Metrics
In order to solve the problem of rapid response of the measurement and control department to the needs of comprehensive assessment of multiple spacecraft systems,this paper designs a system architecture based on a business middle platform and a data middle platform,after in-depth analysis of the task requirements and the technical characteristics of the middle platform.The system architecture uses an evaluation-scenario-driven mode,in which the business platform drives the data platform to complete base data resource integration and evaluation analysis according to the scenario.The middle platform architecture proposed in this paper aims to highlight platform sharing and strengthen service reuse capability,which is of reference value for spacecraft measurement and control departments to improve evaluation task response efficiency and form innovative information service capabilities.
Industrial Equipment Management System for Predictive Maintenance
YU Xin-yi, SHI Tian-feng, TANG Quan-rui, YIN Hui-wu, OU Lin-lin
Computer Science. 2020, 47 (11A): 667-672.  doi:10.11896/jsjkx.200100091
Abstract PDF(3083KB) ( 1639 )   
References | Related Articles | Metrics
An industrial equipment management system for predictive maintenance is developed to solve the problems of chaotic equipment management and high maintenance costs in the manufacturing industry.The system is developed based on the SpringBoot framework in a front-end/back-end separation mode with a Vue front end,which reduces coupling.The equipment management module is designed according to actual production needs and realizes the management of basic equipment information and production data.A good front-end interface for human-computer interaction is developed to achieve visual management of equipment information.The data storage module integrates multiple databases to solve the problem of reading and writing different types of data in the system.The equipment maintenance module is designed based on the Spark big data processing framework to perform online analysis of real-time equipment data.In order to achieve the goal of predictive maintenance,machine learning regression algorithms are used to train predictive models on historical data,realizing real-time monitoring of equipment status and prediction of remaining life.Finally,the feasibility of the designed management system is verified by experiments on industrial robot equipment.
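The remaining-life prediction step can be sketched in its simplest regression form: fit a line to a monotone wear indicator and extrapolate to a failure threshold. The least-squares line here is a stand-in for the paper's machine learning regression models; the data and threshold are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def remaining_life(times, wear, failure_threshold):
    """Extrapolate the fitted degradation trend to the failure threshold
    and return the time remaining after the last observation."""
    slope, intercept = fit_line(times, wear)
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

# Usage: wear grows one unit per hour; failure declared at wear = 10.
rul = remaining_life([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0],
                     failure_threshold=10.0)
```

In the deployed system this computation would run in the Spark-based maintenance module over the streamed sensor history of each machine.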
Design and Realization of Laboratory Equipment Management System Based on QR Code Technology and WeChat Mini-program Technology
CHEN Jing-xian
Computer Science. 2020, 47 (11A): 673-677.  doi:10.11896/jsjkx.200400063
Abstract PDF(2368KB) ( 787 )   
References | Related Articles | Metrics
In view of the wide variety and large quantity of laboratory equipment,equipment management work is complicated and cumbersome,and the work intensity and pressure on laboratory management personnel are high.This paper proposes a laboratory equipment management system based on QR code technology and the WeChat mini-program.Its functions include a personnel management module,an equipment management module and a system management module.The equipment management module includes verification management,equipment borrowing management,equipment allocation management,equipment maintenance management,equipment obsolescence management,etc.The system uses a separate front-end and back-end architecture.The front end includes the mobile terminal (WeChat mini-program) and the PC terminal (Web browser),both using the MVVM (Model-View-ViewModel) design idea.The back end uses the SSM (SpringMVC+Spring+MyBatis) framework to achieve low-coupling,high-cohesion programs.Laboratory equipment uses QR codes for identification and marking,which reduces operating costs and improves efficiency.The realization of this system reduces the repetitive workload of laboratory managers,improves the management efficiency of laboratory equipment,reduces management costs,and better promotes the scientific development of laboratories.