Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 47 Issue 6A, 16 June 2020
  
Artificial Intelligence
Science and Technology Strategy Evaluation Based on Entropy Fuzzy AHP
LIU Zi-qi, GUO Bing-hui, CHENG Zhen, YANG Xiao-bo and YIN Zi-qiao
Computer Science. 2020, 47 (6A): 1-5.  doi:10.11896/jsjkx.190700078
The scientific soundness of an evaluation system directly determines how well the merits and demerits of the evaluated objects can be understood, so it is of great significance to apply scientific methods to the construction of evaluation systems. Aiming at the problem that the traditional Fuzzy Analytic Hierarchy Process (FAHP) relies on experts' ratings of indicators and artificially assigned expert coefficients to calculate index weights, which introduces strong subjective factors and leads to inaccurate results, the Entropy Fuzzy Analytic Hierarchy Process was proposed. Firstly, the expert survey results were analyzed to obtain the judgment matrix; then the expert-coefficient-based index weight calculation of FAHP was replaced with the entropy method; finally, the evaluation scores of the evaluated object were obtained with the fuzzy evaluation method. In order to test the objectivity and effectiveness of the algorithm, the pre-existing effectiveness index of national defense science and technology strategy was taken as the research object, using the "2016 China Aerospace" white paper as an example. The results show that the score obtained by the Entropy Fuzzy Analytic Hierarchy Process is greatly improved, indicating the effectiveness of combining the entropy method with the fuzzy analytic hierarchy process.
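As a rough illustration of the entropy-weighting step that replaces the expert coefficients, the standard entropy method can be sketched as follows (a generic non-negative indicator matrix is assumed; the paper's judgment-matrix construction is not reproduced):

```python
import numpy as np

def entropy_weights(X):
    """Compute indicator weights with the entropy method.

    X: (m samples x n indicators) non-negative matrix.
    Returns a weight vector summing to 1; indicators whose values
    vary more across samples (lower entropy) receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)              # column-normalize to proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # treat 0*log(0) as 0
    e = -(P * logP).sum(axis=0) / np.log(m)      # entropy per indicator
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()

# Toy example: the second indicator is constant across samples,
# so it carries no information and gets (near-)zero weight.
X = np.array([[0.9, 0.5],
              [0.8, 0.5],
              [0.1, 0.5]])
w = entropy_weights(X)
```

A constant column has maximal entropy, hence zero divergence and zero weight; all weight flows to the informative indicator.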
Automatic Summarization Method Based on Primary and Secondary Relation Feature
ZHANG Ying, ZHANG Yi-fei, WANG Zhong-qing and WANG Hong-ling
Computer Science. 2020, 47 (6A): 6-11.  doi:10.11896/jsjkx.191000007
Automatic summarization technology provides users with a concise text description by compressing and refining the original text while retaining the core idea of the document. Traditional methods usually consider only shallow textual semantic information and neglect the guiding role of structural information, such as primary and secondary relations, in core sentence extraction. Therefore, this paper proposes an automatic summarization method based on the primary and secondary relation feature. The method uses a neural network to construct a single-document extractive summarization model based on this feature: a Bi-LSTM neural network encodes the sentence information and the primary and secondary relation information, and an LSTM neural network summarizes the encoded information. Experimental results show that the proposed method achieves a significant improvement in accuracy, stability and the ROUGE evaluation index compared with current mainstream single-document extractive summarization methods.
Text Representation and Classification Algorithm Based on Adversarial Training
ZHANG Xiao-hui, YU Shuang-yuan, WANG Quan-xin and XU Bao-min
Computer Science. 2020, 47 (6A): 12-16.  doi:10.11896/jsjkx.200200076
Text representation and classification are hot topics in the field of natural language understanding. There are many text classification methods, including convolutional networks, recursive networks, self-attention mechanisms and their combinations; however, complex networks cannot fundamentally improve classification performance, and text representation remains the key to text classification. In order to obtain a good text representation and improve classification performance, an LSTM-based representation-learning text classification model is constructed, in which a language model provides the classification model with an initialized text representation and network parameters. The main work is to apply adversarial training, that is, to add perturbations to word vectors to construct adversarial samples, and to train the model on both the original and adversarial samples. By improving the model's robustness to adversarial samples, the quality of the text representation and the generalization performance of the model are enhanced, and thus the classification effect of the classification model is improved. Experimental results show that the method based on adversarial training achieves 92.9%, 93.2% and 98.9% on the benchmark datasets AGNews, IMDB and DBpedia respectively, indicating that the method can improve the classification effect of the model.
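The idea of perturbing input vectors to build adversarial samples can be sketched with an FGSM-style step on a toy logistic-regression model (an illustrative simplification; the paper applies the perturbation to word vectors inside an LSTM model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_example(x, y, w, b, eps=0.1):
    """FGSM-style perturbation of an input vector (e.g. a word embedding).

    For logistic-regression loss L = -log p(y|x), the input gradient is
    (p - y) * w; the adversarial sample moves along the sign of that
    gradient, which maximally increases the loss for small eps.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w          # dL/dx
    return x + eps * np.sign(grad)

# Toy check: for a positive example, the perturbation lowers the
# predicted probability (i.e. raises the loss).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0
x_adv = adversarial_example(x, y, w, b, eps=0.05)
p_before = sigmoid(w @ x + b)
p_after = sigmoid(w @ x_adv + b)
```

Training then mixes the original samples with such perturbed copies so the representation becomes robust to small input noise.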
False Message Propagation Suppression Based on Influence Maximization
CHEN Jin-yin, ZHANG Dun-jie, LIN Xiang, XU Xiao-dong and ZHU Zi-ling
Computer Science. 2020, 47 (6A): 17-23.  doi:10.11896/jsjkx.190900086
With the wide adoption of various social media, the security issues caused by news transmission in social networks are becoming increasingly prominent. In particular, the propagation of false messages poses a great threat to the security of cyberspace. In order to effectively control the propagation of false messages while changing the network topology as little as possible, this paper proposed a false message propagation suppression method based on influence maximization. Firstly, it predicts message propagation with an information cascade prediction model and puts forward two algorithms based on the idea of node influence maximization, Louvain Clustered Local Degree Centrality (LCLD) and Random Maximum Degree (RMD), to obtain the most influential node set. TextCNN is then used to classify the false messages and filter out, from that node set, a small number of key nodes that publish false messages. Re-predicting message propagation on the modified network with the prediction model shows that propagation is significantly suppressed compared with the unmodified network. Finally, the proposed method is verified on the BuzzFeedNews dataset. Experiments prove that the information-cascade-based prediction model fits the actual propagation accurately, and the prediction results on the modified network show that false message propagation can be suppressed. Overall, the influence maximization algorithms can effectively suppress the propagation of false messages by deleting a few nodes containing false messages, which verifies the effectiveness of the proposed method.
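The maximum-degree flavor of influential-node selection can be sketched as follows (a simplification: ties are broken by node id rather than randomly, and the Louvain-clustering variant is not reproduced):

```python
from collections import defaultdict

def top_degree_nodes(edges, k):
    """Return the k nodes with highest degree in an undirected edge list
    (the core of a Random-Maximum-Degree-style selection; ties are
    broken deterministically by node id here)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg, key=lambda n: (-deg[n], n))[:k]

# Toy star-like graph: node 1 touches everything, so it tops the list.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (5, 1)]
top2 = top_degree_nodes(edges, 2)
```

In the paper's pipeline, nodes from this set that TextCNN flags as publishing false messages would then be removed before re-running the cascade prediction.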
Main Melody Extraction Method Based on Saliency Enhancement
JIN Wen-qing and HAN Fang
Computer Science. 2020, 47 (6A): 24-28.  doi:10.11896/jsjkx.191200022
In the field of music information retrieval, extracting the main melody is a very difficult task. In polyphonic music, different sound sources interact with each other, leading to discontinuity in the main melody's pitch sequence, which reduces the raw pitch accuracy of the extracted melody. In response to this problem, a CNN-CRF model with enhanced pitch saliency representation and automatic melody tracking is designed. In order to better exploit the harmonic information, it is proposed to enhance the initial saliency representation computed by SF-NMF with structured data, and to combine the melody characteristics with pitch smoothness constraints under a dynamic programming framework to find the optimal evolution path. Experiments show that the proposed method yields better melody extraction results, and the raw pitch accuracy on both test datasets is higher than that of the other reference methods. Comparing different inputs validates that structured data can enhance the saliency representation and compensate for the pitch misjudgements of SF-NMF.
Establishment of Dynamic Protein Network Model Based on Attenuation Coefficient for Key Protein Prediction
DAI Cai-yan, HE Ju, HU Kong-fa, DING You-wei and LI Xin-xia
Computer Science. 2020, 47 (6A): 29-33.  doi:10.11896/jsjkx.190800071
In the course of biological system transformation, the evolution of proteins is not static but dynamic. The evolutionary mechanism of protein interaction can be well described by constructing a model of the protein interaction network. However, when studying protein-protein interaction with a structural model, the attenuation of historic protein interactions over time during protein evolution should be considered, rather than treating the effects of proteins at different times as identical or ignoring them outright. In this paper, a method for building a dynamic protein network model based on an attenuation coefficient was proposed. When establishing the model, a reasonable attenuation coefficient is used to record the changes of protein interaction, which facilitates later research. After choosing a reasonable attenuation coefficient through experiments and running the same algorithm on different network models, the results verify the effectiveness of the proposed approach.
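One plausible form of such an attenuation is exponential decay of historic interaction weights with age (an illustrative sketch; the paper's exact coefficient form is not specified here, and `lam` is a hypothetical decay parameter):

```python
import math

def decayed_weight(w0, age, lam=0.5):
    """Weight of a historic protein interaction after `age` time steps,
    attenuated exponentially: older evidence counts for less instead of
    being weighted equally or discarded outright."""
    return w0 * math.exp(-lam * age)
```

Under this scheme a current interaction keeps its full weight, and older ones fade smoothly toward (but never exactly to) zero.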
Traffic Strategy in Dense Crowd Environments Based on Expandable Path
GAO Qing-ji, WANG Wen-bo, HOU Shi-hao and XING Zhi-wei
Computer Science. 2020, 47 (6A): 34-39.  doi:10.11896/jsjkx.191100191
Safe and efficient passage through dense crowds is an important problem for robots in applications such as airport terminals. The difficulty lies in adapting to the uncertainty of pedestrian behavior and the variability of feasible paths. Drawing on the social force model of passage and avoidance in crowds, this paper proposed an expandable-path view and a traffic strategy for dense crowd environments. Firstly, this paper built the expandable path model, analyzed the space-time relationship between pedestrians and robots, and extracted expandable paths with path passage probability and credibility. Secondly, a distance convex hull method was proposed to select the expandable path set, on the basis of which Breadth First Search was used to establish the expandable path set and remove redundant paths. Finally, this paper formulated the robot traffic strategy for various environments according to the optimal path evaluation function and the right-of-way rule. Simulation results show that the method achieves higher traffic efficiency in dense crowd environments.
Relation Extraction Method Combining Encyclopedia Knowledge and Sentence Semantic Features
YU Yi-lin, TIAN Hong-tao, GAO Jian-wei and WAN Huai-yu
Computer Science. 2020, 47 (6A): 40-44.  doi:10.11896/jsjkx.190700042
Relation extraction is one of the important research topics in the field of information extraction. Its typical application scenarios include knowledge graphs, question answering systems, machine translation, etc. Recently, deep learning has been applied in a large number of relation extraction studies, and deep neural network based relation extraction methods perform much better than traditional methods in many situations. However, most current deep neural network based relation extraction methods just rely on the corpus itself and lack the introduction of external knowledge. To address this issue, this paper proposed a neural network model that combines encyclopedia knowledge and sentence semantic features for relation extraction. The model introduces the description information of entities in encyclopedias as external knowledge and dynamically extracts entity features through an attention mechanism. Meanwhile, it employs bidirectional LSTM networks to extract the semantic features contained in the sentence. Finally, the model combines the entity features and the sentence semantic features for relation extraction. A series of experiments were carried out on a manually labeled dataset, and the results demonstrate that the proposed model is superior to existing relation extraction methods.
Research on Chinese Patent Summarization Based on Patented Structure
SHU Yun-feng and WANG Zhong-qing
Computer Science. 2020, 47 (6A): 45-48.  doi:10.11896/jsjkx.190500028
Text summarization aims to provide a concise description of the content by compressing and refining the original text. For Chinese patent texts, a summarization algorithm based on the PatentRank algorithm is proposed. Firstly, the candidate sentence groups undergo redundancy processing to remove sentences with high mutual similarity. Then, three different similarity calculation methods are constructed for the patent claims and descriptions to calculate the weights between sentences. Finally, the sentences with high weights are selected as the summarization of the patent. The algorithm achieves good results on the selected datasets, and experimental results demonstrate that the proposed method substantially outperforms existing approaches in terms of the ROUGE measure.
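A TextRank-style sentence-weighting step of the kind PatentRank builds on can be sketched as follows (illustrative: the paper's three patent-specific similarity measures would supply the matrix S; the toy matrix below is made up):

```python
import numpy as np

def rank_sentences(S, d=0.85, iters=50):
    """Score sentences by power iteration over a pairwise similarity
    matrix S (TextRank-style PageRank with damping factor d)."""
    S = np.array(S, dtype=float)          # copy so the caller's S is untouched
    n = S.shape[0]
    np.fill_diagonal(S, 0.0)
    col = S.sum(axis=0)
    col[col == 0] = 1.0                   # avoid division by zero
    M = S / col                           # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r

# Sentences 0 and 1 are highly similar to each other; sentence 2 is an
# outlier, so it should receive the lowest score.
S = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
scores = rank_sentences(S)
```

The highest-scoring sentences would then be taken as the summary, after the redundancy filtering described above.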
Prediction of Vessel Load Based on Vessel Automatic Identification System and Artificial Neural Network
WANG Peng, SU Wei, ZHANG Jiu-wen, LIU Ying-jie and WANG Zhen-rui
Computer Science. 2020, 47 (6A): 49-53.  doi:10.11896/jsjkx.191000074
Traditional methods of obtaining vessel load are mostly based on manual observation, empirical calculation and regression analysis. These methods are usually difficult to operate and have a low level of automation. Moreover, the calculation process relies on a large number of outdated empirical values and statistical formulas that must be updated as vessel types change. At present, obtaining a vessel's dynamic load worldwide is a difficult task. This paper presented a prediction method of vessel load based on the vessel automatic identification system and an artificial neural network, analyzed the mathematical relationship between a vessel's length, breadth, draught and type and its load, established a multi-layer artificial neural network with Adam-Dropout optimization, and determined the best input types of the network and the vessel types it suits. Experiments show that the prediction result is best when the inputs of the ANN are length, breadth, draught and vessel type: the MAPE value of the ANN reaches 7.63%, while the minimum APE value reaches 0.05%. The prediction result is best when the number of hidden layers is 4 and the number of neurons is 11. The method is suitable for crude oil tankers, bulk carriers, chemical tankers, container vessels, liquefied natural gas tankers, liquefied petroleum gas tankers, oil products tankers, general cargo vessels and refrigerated vessels, with MAPE values all below 15%.
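The MAPE measure used to evaluate the load model (the per-sample term being the APE) can be computed as:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.

    Each term |a - p| / |a| is the absolute percentage error (APE) of
    one sample; MAPE is their mean scaled by 100.
    """
    errors = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors) * 100

# Two samples, each off by 10% -> MAPE is 10.0
result = mape([100.0, 200.0], [110.0, 180.0])
```

A MAPE of 7.63%, as reported above, means predictions deviate from the true loads by about 7.6% on average.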
Parametric-free Filled Function Algorithm for Unconstrained Optimization
ZHANG Yu-qin, ZHANG Jian-liang and FENG Xiang-dong
Computer Science. 2020, 47 (6A): 54-57.  doi:10.11896/jsjkx.191000179
The filled function method is an important method for solving unconstrained global optimization problems. The key of the method is to construct a filled function with good properties, a simple form, and an easily solvable minimum. Based on the definition of the filled function and certain conditions on the objective function of the unconstrained global optimization problem, a parameter-free filled function that is simple and easy to compute is proposed. Firstly, under suitable assumptions, some properties of the filled function are studied and proved. Secondly, based on these properties, an algorithm suited to this filled function is established. The algorithm consists of two phases, the minimization phase and the filling phase, which alternate until the termination criterion is met. Finally, numerical experiments on classical examples are carried out and compared with results reported in other literature. Experimental results show that the filled function is feasible and the algorithm is effective, and moreover that the results are accurate and the number of iterations is small.
Application of Power Load Prediction Based on Improved Support Vector Regression Machine
TANG Cheng-e and WEI Jun
Computer Science. 2020, 47 (6A): 58-65.  doi:10.11896/jsjkx.191000042
Electric load forecasting is an important engineering application. In order to improve the accuracy of the dynamical granular support vector regression machine for power load forecasting (DGSVRM), this paper proposes a hybrid algorithm of glowworm swarm optimization (GSO) and pattern search (PS) to optimize the key parameters of the DGSVRM forecasting model. Simulation results show that optimizing the parameters of the prediction model greatly improves prediction accuracy.
Signal Timing Scheme Recommendation Algorithm Based on Intersection Similarity
LUO Jia-lei and MENG Li-min
Computer Science. 2020, 47 (6A): 66-69.  doi:10.11896/jsjkx.190600131
Signal timing control is an important part of the urban traffic control system. Traditional signal timing requires a lot of manpower and time, its effect depends on the experience of the staff, and it can hardly meet the needs of real-time regulation. Therefore, a signal timing scheme recommendation algorithm based on intersection similarity is proposed. Intersection similarity is calculated from various static and dynamic attributes of the intersection to improve the accuracy of intersection matching. Following the collaborative filtering recommendation method, the schemes of similar intersections are recommended to the target intersection to improve the accuracy and effectiveness of signal timing. The experimental results show that the proposed algorithm can accurately recommend signal timing schemes with low algorithmic complexity, making it suitable for scheme recommendation in the context of massive data.
Improving Hi-C Data Resolution with Deep Convolutional Neural Networks
CHENG Zhe, BAI Qian, ZHANG Hao, WANG Shi-pu and LIANG Yu
Computer Science. 2020, 47 (6A): 70-74.  doi:10.11896/jsjkx.190900065
Hi-C technology measures the frequency of all pairwise interactions across the entire genome and has become one of the most popular tools for studying the 3D structure of genomes. In general, Hi-C based studies require deep sequencing of a large amount of chromosome data; Hi-C data with lower sequencing depth is less expensive but does not provide sufficient biological information for subsequent studies. Since Hi-C data contains similar sub-patterns and exhibits continuity within a given region, it can be predicted. This paper explored an improved method based on a convolutional neural network model. It predicts the core Hi-C values over a larger range, extends the depth and receptive field of the convolutional neural network, and predicts the original sequencing reads of Hi-C from 1/16 of the original reads. The experimental results were measured by the Pearson and Spearman correlation coefficients, significant interaction pairs were analyzed using Fit-Hi-C, and chromatin-state analyses with 12 ChromHMM-annotated states were performed using ChromHMM. The results show that the predictions are not only close in numerical distribution but also more reliable than low-resolution Hi-C data in terms of site interaction information and chromatin state.
Improved SVM+BP Algorithm for Muscle Force Prediction Based on sEMG
SONG Yan, HU Rong-hua, GUO Fu-min, YUAN Xin-liang and XIONG Rui-yang
Computer Science. 2020, 47 (6A): 75-78.  doi:10.11896/jsjkx.190900143
In the process of rehabilitation training, patients need the assistance of external equipment to complete their exercises. As the patient's muscle function gradually recovers, the auxiliary force provided by the equipment gradually decreases, which requires rehabilitation training equipment to accurately predict muscle strength over a wide range. Aiming at this problem, a stratified algorithm based on surface electromyography (sEMG) for accurately predicting muscle strength was proposed. In the first stage, the particle swarm optimization (PSO) algorithm is used to improve the Support Vector Machine (SVM) algorithm, addressing the noise in sEMG and the nonlinear separability of the signal itself. The improved SVM is used to build a three-class classifier that preliminarily divides muscle force into three categories: high, medium and low. The second stage uses three BP neural networks, one per muscle strength category, to accurately predict muscle strength. Experimental results show that 20 repeated calculations gave an average absolute error of 0.58 and a variance of 0.18. It is concluded that the combined PSO-SVM+BP model achieves accurate muscle strength prediction.
Wall-following Navigation of Mobile Robot Based on Fuzzy-based Information Decomposition and Control Rules
FANG Meng-lin, TANG Wen-bing, HUANG Hong-yun and DING Zuo-hua
Computer Science. 2020, 47 (6A): 79-83.  doi:10.11896/jsjkx.191000158
Because robot navigation tasks have high real-time requirements and the robot itself is nonlinear and hard to model accurately, while rule-based control generally offers good interpretability and real-time response, a method of robot wall-following navigation based on fuzzy-based information decomposition (FID) and control rules is proposed. On the UCI robot navigation dataset, the original class-imbalanced dataset is over-sampled by FID, an SVM is trained, and control rules are extracted from the SVM. In the rule extraction process, only support vectors are used, which reduces the number of rules and improves real-time performance; these support vectors are used to train a random forest, from which the control rules are extracted. The experimental results show that, on the same dataset, the average F1 score of the proposed method is 0.994, and the recall rate of the minority class increases by 8.09% on average compared with six classic models such as decision trees. Compared with other rule extraction models, the rule extraction method from SVM reduces the rule count by 171.33 on average, and the average decision time per sample on the test set is only 3.145 μs.
Evaluation Model Construction Method Based on Quantum Dissipative Particle Swarm Optimization
ZHANG Su-mei and ZHANG Bo-tao
Computer Science. 2020, 47 (6A): 84-88.  doi:10.11896/jsjkx.190900148
In this paper, a quantum dissipative particle swarm optimization (QD-PSO) algorithm is proposed. Each particle information bit is represented by a superposition of two eigenstates, the quantum information carrier is applied to the population differentiation of the particle swarm, and an adaptive adjustment strategy for the inertia weight is designed. Four classical benchmark functions are tested. The results show that the proposed algorithm has obvious advantages over standard particle swarm optimization (PSO), exponential dissipative particle swarm optimization (APSO) and inertia-decline dissipative particle swarm optimization (W-G-PSO). The algorithm is applied to the construction of a teaching evaluation model to overcome the interference of subjective consciousness with objective evaluation. The results show that the model matches empirical data closely and has higher evaluation accuracy than the artificial experience model.
Multiclass Cost-sensitive Classification Based on Error Correcting Output Codes
WU Chong-ming, WANG Xiao-dan, XUE Ai-jun and LAI Jie
Computer Science. 2020, 47 (6A): 89-94.  doi:10.11896/jsjkx.190500089
An approach to multiclass cost-sensitive classification based on error correcting output codes is studied in this paper, and a new framework is proposed that decomposes the complex multiclass cost-sensitive classification problem into a series of binary cost-sensitive classification problems. In order to obtain the binary cost matrix of each binary cost-sensitive base classifier, a method of computing the expected misclassification costs from the given multiclass cost matrix is proposed, and the general formula for computing the binary costs is given. Experimental results on artificial datasets and UCI datasets show that the proposed method performs similarly to, or even better than, existing methods.
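One plausible reading of deriving the binary expected costs for an ECOC column is to average the multiclass costs between the two merged class groups (an illustrative sketch under that assumption, not necessarily the paper's exact formula):

```python
import numpy as np

def binary_costs(C, pos, neg):
    """Derive the two binary misclassification costs for one ECOC column.

    C[i, j] is the cost of predicting class j when the true class is i;
    pos/neg are the index lists of the classes merged into the positive
    and negative super-classes. The expected cost of each binary error
    is taken as the mean cost over the corresponding class pairs.
    """
    C = np.asarray(C, dtype=float)
    cost_pos_as_neg = C[np.ix_(pos, neg)].mean()  # true positive-group, predicted negative
    cost_neg_as_pos = C[np.ix_(neg, pos)].mean()  # true negative-group, predicted positive
    return cost_pos_as_neg, cost_neg_as_pos

# 3-class cost matrix; column merges classes {0, 1} vs {2}.
C = [[0, 1, 4],
     [1, 0, 2],
     [5, 3, 0]]
cp, cn = binary_costs(C, pos=[0, 1], neg=[2])
```

The resulting pair (cp, cn) forms the 2x2 cost matrix handed to that column's cost-sensitive binary base classifier.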
Polynomial Time Community Detection Algorithm Based on Coupling Strength
YANG Zhuo-xuan, MA Yuan-pei and YAN Guan
Computer Science. 2020, 47 (6A): 102-107.  doi:10.11896/jsjkx.190900170
In capital markets, groups can be divided according to how closely traders are connected, resulting in specific community structures. Community structure detection is one of the most interesting issues in the study of social networks; however, there are few polynomial time algorithms that detect community structure both quickly and accurately. Inspired by the well-known theory of modularity optimization, this paper proposes the idea of using a novel k-strength relationship to represent the coupling distance between two nodes, and presents a community structure detection algorithm using a generalized modularity measure based on the k-strength matrix. To obtain the optimal number of communities, a new parameter-free procedure is adopted, which uses the difference of eigenvalues of a specific transition matrix as the boundary of community classification. Finally, the algorithm is applied to both benchmark networks and real networks. Theoretical analysis and experiments show that the algorithm detects communities quickly and accurately, and is easy to extend to large-scale real networks.
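The plain Newman modularity that the k-strength measure generalizes can be computed directly from the adjacency matrix (the paper would substitute its k-strength coupling matrix for A):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)
    for an undirected graph with adjacency matrix A and community labels."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                     # node degrees
    two_m = k.sum()                       # 2m = total degree
    delta = np.equal.outer(labels, labels)  # 1 where nodes share a community
    return ((A - np.outer(k, k) / two_m) * delta).sum() / two_m

# Two disjoint edges, each its own community: Q = 0.5 for this partition.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
q = modularity(A, [0, 0, 1, 1])
```

A partition with higher Q keeps more edge weight inside communities than a random rewiring with the same degrees would.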
Heuristic Algorithm Based on Block Mining and Recombination for Permutation Flow-shop Scheduling Problem
CHEN Meng-hui, CAO Qian-feng and LAN Yan-qi
Computer Science. 2020, 47 (6A): 108-113.  doi:10.11896/jsjkx.190300151
Combinatorial optimization is widely applied in task problems such as the traveling salesman problem (TSP) and scheduling problems. In this study, an evolutionary-based block model (EBBM) was proposed to increase the speed of convergence while avoiding premature convergence. The main idea of blocks is to find key blocks in chromosomes and use them to improve evolutionary algorithms (EAs) for solving combinatorial optimization problems (COPs). A block is a kind of information that captures the effect of individual genes on the evolution of chromosomes, containing both information that helps evolution and information that hinders it. In this paper, both kinds of information are stored and used: they provide the algorithm with an evolution direction, and their combined influence not only improves the convergence speed of the algorithm but also improves the diversity of its solutions, achieving high stability and good solution quality. The proposed block mechanism includes building a probability matrix, generating blocks by association rules, and applying blocks to construct artificial chromosomes. Since blocks are the basic units for constructing artificial solutions, the blocks mined by association rules not only provide diversity but also control the strength of the block information used in the evolution process according to the chosen confidence level. Finally, to confirm solution quality, the proposed approach was experimentally evaluated on the permutation flow-shop scheduling problem (PFSP). Judged by the average error rate, the optimal error rate and the convergence curves, the experimental results show that the block mechanism with positive and negative information improves the speed of convergence and avoids premature convergence, and also demonstrate that EBBM is applicable and efficient for COPs.
Partheno-genetic Algorithm for Solving Static Rebalance Problem of Bicycle Sharing System
FENG Bing-chao and WU Jing-li
Computer Science. 2020, 47 (6A): 114-118.  doi:10.11896/jsjkx.190700120
The bicycle sharing system has the advantages of improving the urban travel structure and reducing traffic pollution. Keeping the number of bicycles at each site relatively balanced is very important for improving the utilization of the sharing system, which gives rise to the bicycle sharing system rebalancing problem. Since the problem is NP-hard, Fábio et al. proposed the ILS algorithm for the single-vehicle, multiple-visit static bicycle rebalancing case in 2017 and obtained good results. However, the ILS algorithm has a very complicated structure, and its repair operator, which consumes a lot of time, frequently generates inferior solutions. To solve this problem, this paper presents P-SMSBR, a method based on a partheno-genetic algorithm. A more concise optimization process is designed, and a decimal code is used to represent a vehicle path solution. Seven mutation operators are introduced, and an elite strategy is adopted to enhance the search ability of the algorithm. A large number of simulated and real datasets were used to test the performance of the algorithm. The experimental results indicate that the proposed P-SMSBR algorithm has a better optimization effect, obtaining shorter vehicle paths than the ILS algorithm in less time. In addition, the advantages of P-SMSBR become more significant as the number of sites increases. It is an effective method for solving the single-vehicle, multiple-visit static bicycle rebalancing problem.
Segment Weighted Cuckoo Algorithm and Its Application
ZANG Rui and LIU Xiao-xiao
Computer Science. 2020, 47 (6A): 119-123.  doi:10.11896/jsjkx.190400036
In order to balance the local and global search of the cuckoo algorithm, improve its convergence speed in the later stage, and segment the search process, an improved cuckoo algorithm is proposed by introducing a dynamic adaptive step control variable and a corresponding segment-weighted position update formula. The improved algorithm is verified on 12 classical constrained optimization problems and several structural optimization design problems. The results show that, compared with other algorithms, this algorithm is more efficient for most of the above problems.
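The Lévy-flight step generator of standard cuckoo search, which the segment-weighted position update builds on, can be sketched via Mantegna's algorithm (the paper's own step-control variable and weighting formula are not reproduced here):

```python
import math
import numpy as np

def levy_steps(rng, beta=1.5, size=1):
    """Draw Lévy-distributed steps via Mantegna's algorithm.

    u ~ N(0, sigma^2), v ~ N(0, 1), step = u / |v|^(1/beta); the heavy
    tail yields the occasional long jump that gives cuckoo search its
    global exploration ability.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(1)
steps = levy_steps(rng, size=1000)
```

A dynamic step control variable of the kind the paper introduces would then scale these raw steps differently in the early (exploration) and late (exploitation) segments of the search.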
Multi-population Genetic Algorithm for Multi-skill Resource-constrained Project Scheduling Problem
YAO Min
Computer Science. 2020, 47 (6A): 124-129.  doi:10.11896/jsjkx.190900123
Abstract PDF(2245KB) ( 681 )   
References | Related Articles | Metrics
Multi-skill resources are common in production and manufacturing, where they improve resource utilization and production efficiency. This paper takes multi-skill resources as its object and presents a mathematical model of the multi-skill resource-constrained project scheduling problem (MSRCPSP) with the objective of minimizing the makespan of the project. To overcome the tendency of existing genetic algorithms to converge prematurely and miss the global optimum, an improved multi-population genetic algorithm is proposed to solve the model. The algorithm encodes a job priority list and introduces a cross-immigration operator to promote the co-evolution of different populations. In the decoding process, a heuristic flexible resource-skill allocation algorithm is used to allocate resources to jobs, and an improved serial schedule generation scheme is used to schedule them. Finally, numerical experiments on the standard case library PSPLIB verify the effectiveness of the proposed algorithm in solving the MSRCPSP.
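The serial schedule generation scheme used in the decoding step can be sketched generically as follows: jobs are taken in priority order and each is placed at the earliest time that respects its predecessors and the resource capacity. The single renewable resource and the instance data are illustrative assumptions; the paper's flexible multi-skill allocation step is not reproduced.

```python
def serial_schedule(priority, duration, demand, preds, capacity):
    """Decode a precedence-feasible job priority list with a serial SGS.

    priority: jobs in decreasing priority order (predecessors first)
    duration[j], demand[j]: processing time and resource demand of job j
    preds[j]: set of predecessor jobs of j
    capacity: renewable resource capacity per time unit
    """
    start, usage = {}, {}            # usage[t] = resource units busy at time t
    for j in priority:
        # Earliest start respecting finished predecessors.
        t = max((start[p] + duration[p] for p in preds[j]), default=0)
        # Shift right until the resource profile admits the whole job.
        while any(usage.get(t + k, 0) + demand[j] > capacity
                  for k in range(duration[j])):
            t += 1
        start[j] = t
        for k in range(duration[j]):
            usage[t + k] = usage.get(t + k, 0) + demand[j]
    makespan = max(start[j] + duration[j] for j in priority)
    return start, makespan
```

A genetic algorithm over priority lists only has to evolve the ordering; the scheme above turns any feasible ordering into a feasible schedule.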
Novel DQN Algorithm Based on Function Approximation and Collaborative Update Mechanism
LIU Qing-song, CHEN Jian-ping, FU Qi-ming, GAO Zhen, LU You and WU Hong-jie
Computer Science. 2020, 47 (6A): 130-134.  doi:10.11896/JsJkx.190700038
Abstract PDF(3216KB) ( 852 )   
References | Related Articles | Metrics
To address the slow convergence of the classical DQN (Deep Q-Network) algorithm in the early stage of training, this paper proposes a novel DQN algorithm based on function approximation and a collaborative update mechanism, which combines a linear function method with the classical DQN algorithm. In the early stage of training, a linear function network replaces the behavior value function network, and an update rule derived from the strategy value function is proposed, which accelerates the parameter optimization of the neural network and speeds up convergence. The proposed algorithm and the DQN algorithm are applied to the CartPole and Mountain Car problems, and the experimental results show that the proposed algorithm converges faster.
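The kind of linear value-function update the abstract describes for the early training stage can be sketched as one TD(0) step on Q(s, a) = w_a · φ(s). The feature vector, learning rate and discount factor below are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def linear_q_update(w, phi_s, a, r, phi_s2, done, alpha=0.1, gamma=0.99):
    """One TD(0) update of a linear Q-function Q(s, a) = w[a] . phi(s).

    w: (n_actions, n_features) weight matrix, updated in place.
    phi_s, phi_s2: feature vectors of the current and next state.
    """
    target = r if done else r + gamma * max(w[b] @ phi_s2 for b in range(len(w)))
    td_error = target - w[a] @ phi_s
    w[a] += alpha * td_error * phi_s   # gradient step on the squared TD error
    return td_error
```

Because the approximator is linear, each update is a cheap rank-one correction, which is why such a stage can converge faster than training a deep network from scratch.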
Improved Bat Optimization Algorithm Based on Compass Operator
YANG Kai-zhong, TI Meng-tao and XIE Ying-bai
Computer Science. 2020, 47 (6A): 135-138.  doi:10.11896/JsJkx.190800112
Abstract PDF(2124KB) ( 879 )   
References | Related Articles | Metrics
Optimization problems arise widely in fields such as engineering technology and economic management, and the complexity of practical problems makes them difficult for traditional optimization methods to solve. As the iterative calculation proceeds, the standard bat algorithm is prone to falling into local optima and to poor population diversity in the later stage of evolution. Although much work has been done on improving the performance of the bat algorithm, it is still difficult to meet the requirements of convergence speed and optimization accuracy. Aiming at these problems, an improved bat algorithm based on a compass operator (BACO) is proposed. Drawing on the pigeon-inspired optimization algorithm, the compass operator is introduced to help the bat population quickly find high-quality individuals and to improve the exploitation and search ability of the bat algorithm. The algorithm is then compared, in the MATLAB environment, with the genetic algorithm and the standard bat algorithm on six classical multi-dimensional test functions. The results show that the evolutionary efficiency, optimization depth and success rate of the improved algorithm are greatly improved, which is of great value for complex engineering functions.
Computer Graphics & Multimedia
Review of Deep Learning-based Action Recognition Algorithms
HE Lei, SHAO Zhan-peng, ZHANG Jian-hua and ZHOU Xiao-long
Computer Science. 2020, 47 (6A): 139-147.  doi:10.11896/JsJkx.190900176
Abstract PDF(2463KB) ( 5727 )   
References | Related Articles | Metrics
Action recognition is one of the fundamental problems in the field of computer vision, and deep learning-based methods are currently among the mainstream approaches to it. In existing research, traditional feature extraction methods generally hand-design features to represent video actions; however, such methods usually require a separate model to classify the features and cannot achieve high performance in real applications, whereas the introduction of deep learning has brought a new direction for action recognition. This paper briefly reviews action recognition methods based on deep learning. Firstly, the research background and significance of action recognition are introduced, and traditional methods and deep learning-based methods are surveyed respectively. Then, the model architectures of three classes of deep learning-based algorithms, namely Two-Stream networks, 3D ConvNets and CNN-LSTM networks, are classified and introduced. Finally, the commonly used public validation datasets are introduced, and the recognition algorithms are compared horizontally on two data modalities. The datasets can be grouped into two categories: RGB-based (e.g., UCF101, HMDB51) and skeleton-based (e.g., NTU RGB+D). Experimental results show that deep learning-based methods have made great advances, and the application of convolutional neural networks has greatly promoted the development of action recognition algorithms, which are gradually replacing traditional methods based on hand-crafted features. For RGB-based action recognition, Two-Stream networks and 3D ConvNets are currently the state-of-the-art methods; for skeleton-based action recognition, Two-Stream networks and spatiotemporal graph networks achieve the best performance.
Application of Deep Learning in Photoacoustic Imaging
SUN Zheng and WANG Xin-yu
Computer Science. 2020, 47 (6A): 148-152.  doi:10.11896/JsJkx.190700046
Abstract PDF(2829KB) ( 1824 )   
References | Related Articles | Metrics
Photoacoustic imaging (PAI) is a multi-physics-coupled, non-invasive biomedical functional imaging technology. It combines the high contrast of pure optical imaging with the high spatial resolution of ultrasonic imaging, and can simultaneously obtain morphological and functional information about target tissues. In recent years, deep learning (DL) has been widely applied in medical image processing, and PAI reconstruction algorithms based on DL have attracted increasing attention from researchers. This paper reviews the current applications of DL in PAI image reconstruction, summarizes the existing algorithms, analyzes their limitations and forecasts possible future improvements.
Rail Area Extraction Using Extended Haar-like Features and DBSCAN Clustering
LUO Jin-nan and ZHANG Ji-min
Computer Science. 2020, 47 (6A): 153-156.  doi:10.11896/JsJkx.200100008
Abstract PDF(3338KB) ( 742 )   
References | Related Articles | Metrics
Obstacles are a potential threat to the normal operation of trains, and rail area extraction is a key step in detecting them with the train's forward-looking camera. A rail area extraction algorithm needs to detect the position of the rail quickly and effectively without occupying so many computing resources that the obstacle recognition algorithm slows down. This paper proposes a rail area extraction algorithm based on extended Haar-like features and DBSCAN density clustering. Firstly, the image is preprocessed with algorithms such as affine transformation, pooling, gray-level equalization and edge detection. Then the feature points of the rail are extracted with multiple extended Haar-like features. Finally, the DBSCAN algorithm is used to retain the valid feature points, and a curve is fitted through them. The experimental results show that the algorithm can effectively detect the position of the rail area while the train is running, and meets the practical needs of multiple scenes and conditions.
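The last two stages — density-based filtering of candidate rail points, then curve fitting — might look roughly like the sketch below. The minimal DBSCAN, the `eps`/`min_pts` values and the polynomial degree are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over (N, 2) points; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or (d[i] < eps).sum() < min_pts:
            continue                       # already labeled, or not a core point
        labels[i] = cluster
        frontier = [i]
        while frontier:                    # grow the cluster from its core points
            j = frontier.pop()
            for k in np.where(d[j] < eps)[0]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if (d[k] < eps).sum() >= min_pts:
                        frontier.append(k)
        cluster += 1
    return labels

def fit_rail(points, eps=5.0, min_pts=4, degree=2):
    """Keep the densest cluster of feature points and fit x = f(y)."""
    labels = dbscan(points, eps, min_pts)
    keep = labels == np.bincount(labels[labels >= 0]).argmax()
    inliers = points[keep]
    return np.polyfit(inliers[:, 1], inliers[:, 0], degree)
```

Clustering before fitting means isolated false detections are discarded as noise instead of dragging the fitted rail curve off course.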
Color Difference Correction Algorithm Based on Multi Colors Space Information
TANG Jia-lin, ZHANG Chong, GUO Yan-feng, SU Bing-hua and SU Qing-lang
Computer Science. 2020, 47 (6A): 157-160.  doi:10.11896/JsJkx.190800026
Abstract PDF(3805KB) ( 1743 )   
References | Related Articles | Metrics
The colors of a color image often deviate greatly from those of the object itself when the image is acquired by a smart camera, due to the limitations of the imaging conditions. In this paper, a new color correction method is proposed to reduce this deviation and improve color fidelity. In this method, the best-approximating color matrix is obtained by the least squares method in RGB color space and then optimized in L*a*b* color space. The color matrix obtained in this way improves on the exhaustive-search method of approximating the target values in RGB space. To verify the effectiveness of the new algorithm, a standard D65 light source is used for illumination, a ColorChecker 24 chart together with a vectorscope is used as the experimental object, and the measured correction results are obtained. Comparison experiments show that the color balance of the new method is better than that of the traditional color matrix calculation method.
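The RGB-space step — fitting a correction matrix by least squares from measured patch colors to their reference values — can be sketched as follows. The patch data are assumptions, and the subsequent L*a*b*-space optimization described in the abstract is omitted.

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M ~ reference.

    measured, reference: (N, 3) arrays of RGB rows for the same N patches.
    """
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def correct(image_rgb, M):
    """Apply the fitted correction to an (H, W, 3) image."""
    flat = image_rgb.reshape(-1, 3) @ M
    return np.clip(flat, 0, 255).reshape(image_rgb.shape)
```

With 24 chart patches the 3x3 system is heavily overdetermined, which is exactly the situation where a least-squares fit beats matching the target values patch by patch.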
Global Bilateral Segmentation Network for Semantic Segmentation
REN Tian-ci, HUANG Xiang-sheng, DING Wei-li, AN Chong-yang and ZHAI Peng-bo
Computer Science. 2020, 47 (6A): 161-165.  doi:10.11896/JsJkx.191200127
Abstract PDF(2662KB) ( 810 )   
References | Related Articles | Metrics
The task of semantic segmentation is to predict the category of objects at the pixel level; the difficulty lies in retaining enough spatial information while obtaining enough context information. To solve this problem, this paper proposes a global bilateral network semantic segmentation algorithm. In this algorithm, large convolution kernels are integrated into the BiSeNet network, and a global path branch is added to its original spatial path and context path, so that the network can capture more context information. At the same time, the global pooling modules in the attention refinement module and the feature fusion module are replaced with global convolution modules to further improve the network's capability. The experimental results show that the algorithm improves the mIoU index by 0.84% on the Cityscapes dataset and achieves better performance than the BiSeNet network.
Application of Multi-scale Dilated Convolution in Image Classification
WU Hao-hao and WANG Fang-shi
Computer Science. 2020, 47 (6A): 166-171.  doi:10.11896/JsJkx.190600179
Abstract PDF(5538KB) ( 1118 )   
References | Related Articles | Metrics
In deep learning-based image classification, dilated convolution is often used instead of down-sampling to reduce the loss of spatial information. However, no literature examines how dilated convolution performs at different network layers. In this paper, a large number of image classification experiments were carried out, and the network layers best suited to dilated convolution were identified. However, dilated convolution loses the information of neighboring points, resulting in the gridding phenomenon and the loss of part of the image information. To eliminate the gridding phenomenon, this paper also proposes a method of constructing a neural network by using multi-scale dilated convolution in the optimal network layers mentioned above. The experimental results show that the proposed network construction method achieves good results in image classification.
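Why mixing dilation rates removes the gridding effect can be seen with a small 1-D receptive-field computation. This is a generic illustration of the phenomenon, not the paper's network: the kernel size and dilation rates are assumptions.

```python
def receptive_field(kernel, dilations):
    """1-D receptive field of a stack of dilated convolutions (stride 1)."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

def covered_offsets(kernel, dilations):
    """Which input offsets can influence one output position."""
    taps = {0}
    for d in dilations:
        taps = {t + k * d for t in taps
                for k in range(-(kernel // 2), kernel // 2 + 1)}
    return taps

# Same dilation everywhere leaves periodic holes (gridding) ...
same = covered_offsets(3, [2, 2, 2])
# ... while mixed (multi-scale) dilation rates fill the receptive field.
mixed = covered_offsets(3, [1, 2, 3])
```

Both stacks have a receptive field of 13 input positions, but the uniform-dilation stack actually reads only 7 of them — all even offsets — while the multi-scale stack reads all 13.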
Semi-supervised Surgical Video Workflow Recognition Based on Convolution Neural Network
QI Bao-lian, ZHONG Kun-hua and CHEN Yu-wen
Computer Science. 2020, 47 (6A): 172-175.  doi:10.11896/JsJkx.190500154
Abstract PDF(2256KB) ( 1542 )   
References | Related Articles | Metrics
Real-time and robust automatic detection of the open surgery workflow will be a core component of the future artificial-intelligence operating room; combined with other artificial intelligence technologies, it can help medical staff complete a number of routine activities in the operation automatically and intelligently. However, using artificial intelligence and computer vision for surgical workflow recognition requires a large amount of labeled surgical video data for training, and in the medical field labeling surgical video requires expert knowledge, so collecting enough labeled data is difficult and time-consuming. Therefore, this paper takes video data of laparoscopic cholecystectomy as the research object, extracts spatial video features with a convolutional autoencoder under a semi-supervised learning scheme, and extracts sequential features from pairs of context video frames within the same video. The unstructured surgical video data are thereby structured, building a bridge between low-level video features and high-level surgical procedure semantics, realizing intelligent recognition of the surgical workflow at low cost and effectively determining its progress. On a public dataset, the proposed algorithm achieves a Jaccard coefficient of 71.3% and an accuracy of 86.6%, which are good experimental results.
Remote Sensing Image Object Detection Technology Based on Improved YOLO-V2 Algorithm
ZHANG Man, LI Jie, DING Rong-li, CHENG Hao-tian and SHEN Ji
Computer Science. 2020, 47 (6A): 176-180.  doi:10.11896/JsJkx.191100206
Abstract PDF(4252KB) ( 1232 )   
References | Related Articles | Metrics
Traditional methods of remote sensing image object detection suffer from high time complexity and low precision, so detecting specific targets in remote sensing images quickly and accurately has become a hot research topic. To solve this problem, this paper improves the YOLO-V2 object detection algorithm by reducing the number of convolution layers and dimensions and adopting the idea of feature pyramids to increase the number of detection feature scales, thereby improving detection accuracy. At the same time, a general processing framework for deep learning-based remote sensing image object detection is presented to handle large remote sensing images that cannot be processed directly. Comparison experiments on the DOTA dataset show that the improved YOLO-V2 algorithm achieves better precision and recall than YOLO-V2 in all 15 categories, with the mAP value increased by 0.12. Its time complexity is slightly lower than that of YOLO-V2: on 416×416 image patches, the detection time of the improved algorithm is reduced by 0.1 ms.
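The framework step for large images — slicing them into overlapping fixed-size patches and mapping patch-level detections back to global coordinates — can be sketched generically as below. The 416×416 patch size matches the abstract, while the overlap value is an assumption.

```python
import numpy as np

def make_patches(image, size=416, overlap=64):
    """Slice an (H, W, C) image into overlapping size x size patches.

    Returns a list of (y0, x0, patch); (y0, x0) lets detections on the
    patch be shifted back into full-image coordinates.
    """
    h, w = image.shape[:2]
    stride = size - overlap
    ys = list(range(0, max(h - size, 0) + 1, stride))
    xs = list(range(0, max(w - size, 0) + 1, stride))
    # Always include a patch flush with the bottom/right border.
    if h > size and ys[-1] != h - size:
        ys.append(h - size)
    if w > size and xs[-1] != w - size:
        xs.append(w - size)
    return [(y, x, image[y:y + size, x:x + size]) for y in ys for x in xs]

def to_global(box, y0, x0):
    """Shift a patch-level (x1, y1, x2, y2) detection to image coordinates."""
    x1, y1, x2, y2 = box
    return (x1 + x0, y1 + y0, x2 + x0, y2 + y0)
```

The overlap ensures an object cut by one patch border appears whole in a neighboring patch; duplicate detections from overlapping patches are typically merged afterwards with non-maximum suppression.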
Sparse Representation Target Tracking Algorithm Based on Multi-scale Adaptive Weight
CHENG Zhong-jian, ZHOU Shuang-e and LI Kang
Computer Science. 2020, 47 (6A): 181-186.  doi:10.11896/JsJkx.190500093
Abstract PDF(4387KB) ( 883 )   
References | Related Articles | Metrics
Target tracking is an important research field in computer vision, widely used in traffic navigation, autonomous driving, robotics and many other areas. The generative algorithm ASLA, based on local sparse representation, is fast and has high tracking accuracy, but it often loses its target in complex tracking environments, such as partial occlusion of the target or dramatic changes in its appearance. This paper analyzes the tracking principle of the original algorithm to find the cause of target loss, and proposes three improvements to ASLA. 1) The size of the target area is tracked adaptively with a multi-scale blocking method to obtain complementary local information about the target. 2) In ASLA's feature pooling process, adaptive block weights are modeled from the block reconstruction errors to distinguish the discriminative information contained in different blocks, and target occlusion information at different scales is introduced as weights into the multi-scale pooled features. 3) When the template is updated, the weight of the latest tracking results in the subspace sparse representation is increased so that the updated template is more similar to the recent tracking results, improving the robustness of the algorithm. Experimental results show that in complex tracking environments the algorithm has higher tracking accuracy than algorithms such as ASLA, and can track the target accurately in real time.
Automatic Voice Detection Algorithm for Schizophrenic Combining EHHT and CI
TIAN Wei-wei, ZHOU Yue, YIN Wang, HE Ling, DENG Li-hua and LI Yuan-yuan
Computer Science. 2020, 47 (6A): 187-195.  doi:10.11896/JsJkx.190900064
Abstract PDF(4853KB) ( 916 )   
References | Related Articles | Metrics
Based on a study of the clinical characteristics of schizophrenic speech, the experiment collected 686 vowel samples from 14 schizophrenic patients and 793 vowel samples from 14 healthy controls matched in gender, age and education level, establishing a pathological voice database. An improved formant extraction algorithm combining the Ensemble Hilbert-Huang Transform (EHHT) with Cepstrum Interpolation (CI) is used to obtain an acoustic feature parameter set reflecting the voice quality changes of schizophrenic speech; combined with a Support Vector Machine (SVM) classifier, automatic voice-based discrimination between schizophrenic patients and healthy controls is achieved. Experiments were also designed to examine the influence of four factors on detection performance, namely the number and variance of the added white noise, the number of IMF components and the window length, and the method was compared with classical formant estimation methods. Experimental results show that the detection accuracy of the proposed algorithm can reach 98.8%, and that the formant-based acoustic parameters representing voice quality differ significantly between schizophrenic patients and healthy controls, which may provide a new objective, quantitative and efficient indicator for research on computer-assisted clinical diagnosis of schizophrenia.
Novel Threat Degree Analysis Method for Scattered Objects in Road Traffic Based on Dynamic Multi-feature Fusion
WU Hong-tao, LIU Li-yuan, MENG Ying, RONG Ya-peng and LI Lu-kai
Computer Science. 2020, 47 (6A): 196-205.  doi:10.11896/JsJkx.190900066
Abstract PDF(6601KB) ( 974 )   
References | Related Articles | Metrics
Scattered objects in road traffic pose a potential safety threat to transportation. In the context of industrial applications of environment sensing for autonomous driving, a novel threat-degree analysis method for scattered objects in road traffic, based on dynamic multi-feature fusion, was proposed in this paper. It realizes the tracking of multiple vehicles and the automatic analysis of the threat posed by scattered objects to vehicles in the driving area. In the proposed method, in order to extract the traffic characteristic parameters of the foreground vehicles, a multi-vehicle tracking method was first studied, and a novel tracking algorithm based on Camshift and identity data association was proposed; the algorithm records the identity data of the tracked vehicles in a track list, so that multiple foreground vehicle targets can be tracked in real time. Then, the dynamic features of the vehicles are extracted from the traffic characteristic parameters, and a safety analysis model for the scattered objects is built on top of the target tracking. A threat-degree analysis method for scattered objects in road traffic is put forward by analyzing the dynamic multi-feature data of the tracked vehicles. The proposed method not only overcomes the limitation of sensing the environment with a single dynamic feature, but also accurately estimates the threat degree that scattered objects pose to transportation through a multi-feature fusion decision method. Finally, to verify the robustness and practicability of the proposed method, an experiment was designed using both simulated and real road video: the simulated video was produced with 3ds Max, the real video was captured by a CCD camera, the algorithm was tested on a software platform built with VS2008 and OpenCV, the result figures were plotted with MATLAB 2014, and the resolution of the video images was 320×240. The results show that the proposed method can accurately analyze the threat degree of scattered objects in road traffic, and that adopting a third-party test perspective can broaden the applicability of threat-degree analysis for a particular vehicle's safety area. By providing a safety threat-degree analysis model of the vehicle's surrounding environment, the proposed method offers a theoretical basis and technical support for on-board safe-driving decision-making in autonomous vehicles.
Remote Sensing Image Single Tree Detection Based on Active Contour Evolution Model
YE Yang, ZHOU Qi-zheng, SHEN Ying and FAN Jing
Computer Science. 2020, 47 (6A): 206-212.  doi:10.11896/JsJkx.191100138
Abstract PDF(4372KB) ( 978 )   
References | Related Articles | Metrics
Single-tree detection is a method of automatically or semi-automatically acquiring single-tree information by combining remote sensing imagery with computer vision technology. Aiming at the mutual occlusion of large numbers of trees in complex forest scenes, and at the over-extraction of crown vertices and crown outlines caused by the many weak edges inside a crown, a single-tree detection method for remote sensing images based on an active contour evolution model is proposed. The method delineates shadow control areas based on the prior knowledge that the number of shadows is positively correlated with the number of trees, and uses the shadow centroids as crown vertices. Then a morphological active contour evolution model (Snake model), optimized with the illumination angle, is used to delineate the crown contour so that it can cross weak boundary points; finally, the crown contour is optimized according to its shape features. The experimental results show that the method improves the accuracy of single-tree information extraction in complex forest scenes, reduces the misrecognition rate in crown extraction, and makes the crown contour shape more accurate.
Contaminated and Shielded Number Plate Recognition Based on Convolutional Neural Network
LI Lin, ZHAO Kai-yue, ZHAO Xiao-yong, WEI Shuai-qin and ZHANG Bing
Computer Science. 2020, 47 (6A): 213-219.  doi:10.11896/JsJkx.191100089
Abstract PDF(3911KB) ( 1179 )   
References | Related Articles | Metrics
As one of the important components of intelligent transportation, license plate recognition plays an irreplaceable role in daily life. For example, illegal vehicles often avoid punishment because their number plates are contaminated or shielded, which further increases the difficulty of law enforcement, so improving the recognition of contaminated license plates remains a crucial issue for today's automatic recognition systems. This paper focuses on the recognition of shielded number plates, covering four main cases: normal plates, partially shielded plates, completely shielded plates and missing plates. The traditional OCR algorithm is highly accurate at recognizing Chinese characters, letters and numbers, and when applied to license plates it detects normal and partially shielded plates well, but its recognition of completely shielded and missing plates is still very poor. With the development of artificial intelligence, better recognition of completely shielded and missing plates has become possible. Therefore, combining the advantages of traditional algorithms, this paper adopts OCR technology together with current deep learning algorithms to optimize the recognition of contaminated license plates.
Multi-threshold Segmentation for Color Image Based on Improved Tree-seed Algorithm
PENG Hao and HE Li-fang
Computer Science. 2020, 47 (6A): 220-225.  doi:10.11896/JsJkx.191000180
Abstract PDF(4192KB) ( 782 )   
References | Related Articles | Metrics
Multi-threshold segmentation of color images plays a very important role in various applications, but in traditional multi-threshold segmentation algorithms the segmentation time increases sharply with the number of thresholds. To overcome this problem, this paper proposes a multi-threshold color image segmentation algorithm based on an improved tree-seed algorithm (ITSA), with OTSU as the objective function. To improve the search speed and accuracy of the basic tree-seed algorithm (TSA), a new self-adaptive search tendency constant is presented to balance local search and global search. The performance of ITSA is tested on five standard test images and compared with TSA, particle swarm optimization (PSO) and the differential evolution (DE) algorithm. Experimental results show that ITSA outperforms TSA, PSO and DE on multi-threshold color image segmentation, and that the OTSU-plus-ITSA method is a good algorithm for this task.
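The OTSU objective that ITSA maximizes — the between-class variance of the gray-level classes induced by a set of thresholds — can be sketched as follows. This is the generic OTSU formulation; the tree-seed search itself is not reproduced, and the tiny exhaustive search in the usage note is only for checking.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """OTSU objective for a gray-level histogram and sorted thresholds.

    hist: counts per gray level; thresholds: class boundaries.
    Higher values mean the thresholds separate the classes better.
    """
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0] + list(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(edges, edges[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2      # weighted class separation
    return var
```

Exhaustive maximization of this objective is what becomes prohibitively slow as the number of thresholds grows, which is exactly why the paper delegates the search to a metaheuristic.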
Content-independent Method for Basis Image Extraction and Image Reconstruction
LAN Zhang-li, SHEN De-xing, CAO Juan and ZHANG Yu-xin
Computer Science. 2020, 47 (6A): 226-229.  doi:10.11896/JsJkx.200160009
Abstract PDF(2685KB) ( 788 )   
References | Related Articles | Metrics
As a typical kind of signal, an image can in theory be composed of a series of basic signals. In order to find such a set of basic signals for reconstructing images, a method is proposed for obtaining basis images through feature extraction and reconstructing images from them. Because it is content-independent, basis images can be obtained from any set of images, and images can be reconstructed from the basis images so obtained. The algorithm flow for extracting a series of basis images from a training set by a feature extraction algorithm is described, and a system is developed that projects each test image into the space spanned by the k basis images and reconstructs the original image from the projection coefficients and the basis images. The experimental results show that, by controlling the number of basis images, the error and quality of the reconstructed images can meet high requirements, and that the method of basis image extraction and image reconstruction is content-independent. The method is also helpful for understanding abstract image features and deep neural networks.
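A generic instance of this pipeline — extracting basis images from a training set, projecting an image onto the first k of them, and reconstructing from the coefficients — can be sketched with SVD/PCA. The paper's specific feature extraction algorithm is not reproduced; SVD here is a stand-in choice.

```python
import numpy as np

def basis_images(train, k):
    """train: (N, H*W) flattened images. Returns the mean and k basis images."""
    mean = train.mean(axis=0)
    # Right singular vectors of the centered data are orthonormal basis images.
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(image, mean, basis):
    """Project a flattened image onto the basis and rebuild from coefficients."""
    coeffs = basis @ (image - mean)
    return mean + coeffs @ basis
```

Increasing k trades storage for fidelity: with enough basis images the reconstruction of a training image becomes exact, matching the abstract's observation that the error is controlled by the number of basis images kept.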
Face Image Restoration Based on Residual Generative Adversarial Network
LI Ze-wen, LI Zi-ming, FEI Tian-lu, WANG Rui-lin and XIE Zai-peng
Computer Science. 2020, 47 (6A): 230-236.  doi:10.11896/JsJkx.190400118
Abstract PDF(5270KB) ( 1136 )   
References | Related Articles | Metrics
Benefiting from the rapid development of computer vision, face image restoration technology can generate a complete face image from the face contour alone. Many face restoration techniques based on convolutional neural networks and generative adversarial networks have been proposed; they can restore partially damaged face images or even generate face images directly from face contours. However, the qualitative and quantitative results of the face images restored by these techniques are not ideal, and there are many limitations in the restoration process. Therefore, this paper proposes a face image restoration method based on a residual generative adversarial network (FR-RGAN), which improves model performance by means of deep convolution, residual networks and smaller convolution kernels, and uses the face contour to restore local facial details, making them more vivid. Experimental results show that, compared with pix2pix, FR-RGAN improves the mean squared error, peak signal-to-noise ratio and structural similarity index by 8.7%, 2.1% and 9.6% respectively, and is 53.4%, 12.6% and 30.1% better than the non-residual method.
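Two of the quantitative metrics cited above, mean squared error and peak signal-to-noise ratio, can be computed as follows (SSIM is omitted for brevity; the peak value assumes 8-bit images).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)
```

Because PSNR is a log transform of MSE, a small percentage gain in PSNR (such as the 2.1% reported above) corresponds to a substantially larger relative reduction in MSE.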
Candidate Region Detection Method for Maritime Ship Based on Visual Saliency
LIU Jun-qi, LI Zhi and ZHANG Xue-yang
Computer Science. 2020, 47 (6A): 237-241.  doi:10.11896/JsJkx.191000196
Abstract PDF(4923KB) ( 799 )   
References | Related Articles | Metrics
Maritime ship detection technology has important civil and military value. Aiming at the low accuracy of ship detection in complex sea scenes, a candidate region detection method for maritime ships based on visual saliency is proposed. To detect all candidate ship regions, the proposed method first uses the Scharr edge detection operator to extract the edge contour features of salient targets, and then applies the FT saliency algorithm to the edge detection results to obtain the final candidate regions. Experimental results on publicly available remote sensing databases show that the proposed method achieves good results when detecting candidate ship regions in a variety of complex marine scenes, and realizes quick extraction of the candidate regions.
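The FT (frequency-tuned) saliency step can be sketched as the per-pixel distance between a smoothed image and the global mean color: pixels far from the dominant sea color stand out. In this numpy-only sketch a box blur stands in for the usual Gaussian, and the synthetic scene is an assumption; it is not the paper's implementation.

```python
import numpy as np

def box_blur(channel, r=2):
    """Mean filter with an edge-padded (2r+1) x (2r+1) window."""
    h, w = channel.shape
    padded = np.pad(channel, r, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def ft_saliency(image):
    """image: (H, W, C) float array. Salient pixels lie far from the mean color."""
    mean = image.reshape(-1, image.shape[2]).mean(axis=0)
    blurred = np.stack([box_blur(image[..., c])
                        for c in range(image.shape[2])], axis=-1)
    return ((blurred - mean) ** 2).sum(axis=-1)
```

On open water the mean color is close to the sea itself, so a ship — a compact region of deviating color — receives a much higher saliency score than the background.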
Fall Detection Algorithm Based on BP Neural Network
ZHOU Li-peng, MENG Li-min, ZHOU Lei, JIANG Wei and DONG Jian-ping
Computer Science. 2020, 47 (6A): 242-246.  doi:10.11896/JsJkx.191000077
Abstract PDF(2537KB) ( 1143 )   
References | Related Articles | Metrics
Falls are a very serious problem for the elderly, and real-time detection of whether an elderly person has fallen is of great significance for reducing the resulting injuries. Therefore, a fall detection algorithm based on a BP neural network is proposed in this paper. The algorithm collects human motion data with a six-axis sensor (MPU6050) worn at the waist and uses simple statistical methods to extract features from the data. The extracted features serve as the input neurons of the BP neural network, and the Levenberg-Marquardt algorithm is used to train the network so that it can perform fall detection. Experimental results show that the algorithm recognizes falls well, with an accuracy of up to 99.55%.
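The statistical feature step might look like the sketch below, built around the signal magnitude vector commonly computed from waist-worn accelerometer windows. The exact features the paper uses, and the BP network itself, are not reproduced; this is an illustrative assumption.

```python
import numpy as np

def window_features(acc):
    """acc: (N, 3) accelerometer window (ax, ay, az) in g.

    Returns simple statistics a fall classifier could consume:
    peak signal magnitude vector, and the mean / std / range of the magnitude.
    """
    svm = np.linalg.norm(acc, axis=1)   # signal magnitude vector per sample
    return np.array([svm.max(), svm.mean(), svm.std(), svm.max() - svm.min()])
```

A fall typically produces a sharp spike in the magnitude followed by a posture change, so the peak and range features separate falls from steady activities such as standing or walking; these feature vectors would then be fed to the trained network.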
Surface Defect Detection Method of Industrial Products Based on Histogram Difference
YANG Zhi-wei, DAI Ming and ZHOU Zhi-heng
Computer Science. 2020, 47 (6A): 247-249.  doi:10.11896/JsJkx.191000049
Abstract PDF(3539KB) ( 782 )   
References | Related Articles | Metrics
With the rapid development of computer vision, human labor in product inspection is gradually being replaced by machine vision, especially in production environments where workers should not stay long; automatic detection of surface defects on industrial products is an inevitable trend of modern industry. In this paper, defect detection is treated as a special image segmentation problem: the product surface is taken as the background and surface defects as the foreground to be extracted. The segmentation is based on the difference between the gray-level distribution histograms of the foreground and the background, and on the similarity between the background histogram and a prior background distribution histogram. Combining a nonparametric statistical active contour model with the prior distribution, the gray-level distribution of the product surface is used as background prior information to construct the corresponding energy function; the iteration equation of the level set function is then obtained by minimizing this energy function, making defect detection more efficient. Experiments show that the proposed defect detection method improves significantly both visually and in numerical indexes such as detection accuracy, false alarms and missed detections.
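The histogram-difference idea can be illustrated with two standard distances between gray-level distributions (chi-square and the Bhattacharyya distance): a defect region's histogram lies far from the prior background histogram, while intact surface regions lie close to it. The paper's level-set energy function is not reproduced, and the bin count is an assumption.

```python
import numpy as np

def normalized_hist(gray, bins=32):
    """Gray values in [0, 256) -> a probability histogram."""
    h, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return h / h.sum()

def chi_square(p, q, eps=1e-12):
    """0 for identical histograms; approaches 1 for disjoint supports."""
    return float(0.5 * np.sum((p - q) ** 2 / (p + q + eps)))

def bhattacharyya(p, q):
    """0 for identical distributions, larger for more separated ones."""
    return float(-np.log(np.sum(np.sqrt(p * q)) + 1e-12))
```

Either distance can serve as the data term of a segmentation energy: regions whose histogram is near the background prior are labeled surface, and regions far from it are labeled defect.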
New Representation of Facial Affect Based on Triangular Coordinate System
XIAO Xiao and KONG Fan-zhi
Computer Science. 2020, 47 (6A): 250-253.  doi:10.11896/jsjkx.190700081
Abstract PDF(2615KB) ( 833 )   
Based on the triangular coordinate system, generalized triangular coordinates are defined and used to represent facial expression features. Combined with a Gaussian-kernel SVM classifier, leave-one-out cross-validation is used to obtain the facial expression classification. On the CK+ facial expression database, the recognition rate is 98.2%, a large improvement over the benchmark algorithm and the M-CRT algorithm, indicating the effectiveness of the proposed facial expression feature representation method.
Comparative Study of DBN and CNN for Pulmonary Nodule Image Recognition
ZHANG Hua-li, KANG Xiao-dong, RAN Hua, WANG Ya-ge, LI Bo and BAI Fang
Computer Science. 2020, 47 (6A): 254-259.  doi:10.11896/jsjkx.190700107
Abstract PDF(2981KB) ( 910 )   
Aiming at the accuracy and efficiency of pulmonary nodule image classification and recognition, a CNN model and a DBN model were used to classify pulmonary nodules, and the performance of the two deep learning models was evaluated. Firstly, the pre-processed training set and labels were input into the CNN model and the DBN model respectively to train them. Secondly, the test set was input into the parameter-optimized models, and the classification accuracy, sensitivity and specificity of the two models were compared. Finally, the two models were analyzed and compared on these three indicators as well as on time complexity. It is found that the CNN model is more advantageous in the classification and recognition of pulmonary nodules.
Automatic Tumor Recognition in Ultrasound Images Based on Multi-model Optimization
GU Wan-rong, FAN Wei-jiang, XIE Xian-fen, ZHANG Zi-ye, MAO Yi-jun, LIANG Zao-qing and LIN Zhen-xi
Computer Science. 2020, 47 (6A): 260-267.  doi:10.11896/jsjkx.191200011
Abstract PDF(3794KB) ( 1263 )   
With the development of computer vision recognition technology, more and more researchers apply it to the recognition of tumor images. However, because of cost, many hospitals still use low-cost ultrasound and similar equipment, which produces blurring, artifacts and many tumor-like noise regions. Existing methods achieve high precision on clear images but show low accuracy and unstable results on ultrasound images, because many algorithms misjudge blurred and noisy images. In this paper, the key features of high-noise ultrasound images are obtained quickly and accurately by R-CNN and RPN methods, and the stability of recognition is ensured by data augmentation and morphological filtering. At the same time, a classification model of blood flow signals is fused in to improve recognition accuracy. On a data set of real thyroid neoplasm images, the proposed method is more accurate and stable than recent algorithms.
Computer Network
Survey on Technology and Application of Edge Computing
ZHAO Ming
Computer Science. 2020, 47 (6A): 268-272.  doi:10.11896/jsjkx.190600115
Abstract PDF(4400KB) ( 3198 )   
Edge computing, as a new computing paradigm after cloud computing, migrates computation to the edge close to users and data sources, and provides data caching and processing functions with low latency, high security and location awareness. Starting from the content delivery network used for edge caching, this paper summarizes the development of edge computing and the evolution from content delivery networks to cloud computing, fog computing and edge computing, and sorts out relevant achievements from the perspectives of academia and industry. Then, three popular edge computing architectures are introduced, and typical application scenarios of edge computing are summarized: vehicle networking, industrial production and smart city. Finally, based on the military application background of the naval battlefield, an architecture of a command information system based on edge computing is proposed, and future development trends and application directions are discussed.
Emergency Task Assignment Method Based on CQPSO Mobile Crowd Sensing
LI Jian-jun, WANG Xiao-ling, YANG Yu and FU Jia
Computer Science. 2020, 47 (6A): 273-277.  doi:10.11896/jsjkx.190700040
Abstract PDF(2104KB) ( 832 )   
In view of the problem of emergency task assignment in mobile crowd sensing, and considering how to assign tasks under given time constraints with the lowest perceived cost and the maximum number of completed tasks, a chaotic quantum particle swarm based emergency task assignment method (CQPSOETA) is proposed. Experimental results show that the chaotic quantum particle swarm optimization algorithm works well for the assignment of emergency tasks in mobile crowd sensing: it achieves the assignment optimization goal in a short time, greatly improves convergence speed, avoids falling into local optima, and obtains a globally optimal effect.
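A minimal one-dimensional sketch of the two ingredients the method combines, chaotic (logistic-map) initialization and the quantum-behaved PSO position update; the attractor form, the contraction coefficient alpha and all numeric values are illustrative assumptions rather than the paper's exact formulation.

```python
import math
import random

random.seed(0)

def logistic_map(x0, n):
    """Chaotic logistic sequence x_{k+1} = 4 x_k (1 - x_k), commonly used
    to spread initial particle positions over (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

def qpso_step(position, pbest, gbest, mbest, alpha=0.75):
    """One quantum-behaved PSO update: sample a point near the local
    attractor p with a spread proportional to |mbest - position|."""
    phi = random.random()
    p = phi * pbest + (1.0 - phi) * gbest          # local attractor
    u = random.random()
    sign = 1.0 if random.random() < 0.5 else -1.0
    return p + sign * alpha * abs(mbest - position) * math.log(1.0 / u)

positions = logistic_map(0.37, 5)                  # chaotic initial swarm
new_pos = qpso_step(positions[0], pbest=0.9, gbest=0.8, mbest=0.6)
```

In the task-assignment setting, a particle's position would encode a candidate task-to-worker assignment and the fitness would combine perceived cost and task completion count.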
Clustering Single-hop Routing Protocol Based on Energy Supply for Wireless Sensor Network
FENG Jun, KONG Jian-shou and WANG Gang
Computer Science. 2020, 47 (6A): 278-282.  doi:10.11896/jsjkx.191100033
Abstract PDF(2271KB) ( 683 )   
Aiming at the energy limitation of wireless sensor networks, a clustering single-hop routing protocol with continuous energy supply is proposed to remedy the insufficient energy supply of traditional routing protocols. The algorithm proceeds in cycles, each of which includes several stages: determining the number of cluster heads, the cluster head selection mechanism, non-cluster-head attachment, and data transmission. Simulation results show that, compared with traditional wireless sensor network routing protocols, the proposed method keeps a larger number of nodes alive while consuming less network energy, verifying its correctness and effectiveness.
Deep Learning Based Modulation Recognition Method in Low SNR
CHEN Jin-yin, CHENG Kai-hui and ZHENG Hai-bin
Computer Science. 2020, 47 (6A): 283-288.  doi:10.11896/jsjkx.190800072
Abstract PDF(2290KB) ( 1707 )   
Modulation recognition of radio signals is an intermediate step between signal detection and demodulation. Existing research shows that deep learning can effectively identify the modulation types of radio signals, but at low signal-to-noise ratio there is still no good solution to the sharp drop in recognition accuracy. Inspired by noise reduction in the image field, a deep learning based modulation recognition method for low SNR was proposed in this paper. It denoises low-SNR signals and thereby addresses the sharp accuracy drop. Extensive experiments on open-source datasets verify the effectiveness of the proposed method: the recognition accuracy on low-SNR signals increases by 10% to 15%. Finally, the remaining problems of the method are analyzed and future research directions are discussed.
Research on Intelligent Multi-mode Converged Gateway Device Based on AMI
XIAO Yong, JIN Xin, WANG Li-bo and LUO Hong-xuan
Computer Science. 2020, 47 (6A): 289-293.  doi:10.11896/jsjkx.190800050
Abstract PDF(3579KB) ( 917 )   
With the development of the smart grid and rising customer demand for power quality, a variety of communication technologies are now used in automatic meter reading systems of power grids. Different communication technologies have their own advantages and disadvantages, so each supports different mainstream services of the grid. This makes it difficult for the different communication systems in a power grid to interoperate, and inconvenient to perform unified management and resource scheduling. In response to these problems, this paper studies a multi-service multi-mode converged gateway device based on the AMI (Advanced Metering Infrastructure) intelligent measurement system. Firstly, the hardware and interface design of the converged gateway is introduced, realizing unified access of modules from different communication systems through common interfaces. Secondly, the converged gateway protocol system is analyzed, and protocol parsing below layer L2 and a unified network transport protocol above layer L2 are designed. Finally, the device is implemented and network testing is performed in typical application scenarios.
Spectrum Occupancy Prediction Model Based on EMD Decomposition and LSTM Networks
ZHAO Xiao-dong, SU Gong-jin, LI Ke-li, CHENG Jie and XU Jiang-feng
Computer Science. 2020, 47 (6A): 294-298.  doi:10.11896/jsjkx.190700097
Abstract PDF(4391KB) ( 1130 )   
Spectrum occupancy is an important basis for measuring spectrum utilization and judging whether spectrum allocation is reasonable. However, the non-stationary nature of spectrum occupancy sequences presents great challenges for effective prediction. In this paper, a new computing model (EMD-LSTM) combining EMD and LSTM is proposed. Firstly, empirical mode decomposition (EMD) of the original occupancy sequence generates intrinsic mode functions (IMFs) at different time scales, and the highly correlated IMFs are selected by the Pearson correlation coefficient. Then, the selected IMFs are fused with the spectrum occupancy sequence, and the occupancy sequence is predicted using a long short-term memory (LSTM) network. Simulation experiments show that, compared with an ordinary LSTM network, the new model greatly improves the prediction of changes in spectrum occupancy.
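The Pearson screening step that keeps only IMFs strongly correlated with the occupancy sequence can be sketched in a few lines; the 0.5 threshold and the toy sequences are illustrative assumptions (real IMFs would come from an EMD implementation such as the PyEMD package).

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

occupancy = [0.2, 0.5, 0.7, 0.4, 0.6, 0.8, 0.3, 0.5]
imfs = {
    "imf1": [0.1, 0.4, 0.8, 0.3, 0.5, 0.9, 0.2, 0.4],  # tracks the series
    "imf2": [0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1],  # high-freq noise
}
# keep only IMFs whose correlation with the occupancy sequence is strong
selected = [k for k, v in imfs.items() if abs(pearson(occupancy, v)) > 0.5]
```

The selected IMFs, concatenated with the original sequence, would then form the LSTM's input.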
Model of Cartesian Product of Modulo p Residual Class Addition Group for Interconnection Networks
SHI Teng and SHI Hai-zhong
Computer Science. 2020, 47 (6A): 299-304.  doi:10.11896/jsjkx.190700047
Abstract PDF(1671KB) ( 667 )   
Many applications require high computational density, i.e. the computational power a system delivers within a certain volume or area. This is why large-scale distributed computing such as grid computing and cloud computing cannot completely replace supercomputing, and supercomputers are also widely used in emerging fields. Academician Chen Zuoning pointed out that the United States is developing an exascale supercomputer with a new advanced architecture (probably not a classical one), and China is also actively developing its own exascale supercomputer. The interconnection network is an important part of supercomputer architecture; Academician Chen pointed out that it is decisive for the performance-price ratio of the system. In this paper, a model based on the Cartesian product of modulo p residue class addition groups was designed for interconnection networks. It can characterize well-known interconnection networks such as the hypercube and the folded hypercube. More importantly, many new interconnection networks have been designed using this model; these networks have their own characteristics and greatly enrich the seed bank of interconnection networks.
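For p = 2 the model specializes to the hypercube: Q_n is the Cartesian product of n copies of the addition group Z_2, with two vertices adjacent exactly when they differ in one coordinate. A minimal sketch of this special case (an illustration only, not the paper's general construction):

```python
from itertools import product

def hypercube_edges(n):
    """Vertices and edges of Q_n viewed as the Cartesian product of n
    copies of Z_2: vertices are 0/1 vectors, adjacent iff they differ
    in exactly one coordinate."""
    verts = list(product(range(2), repeat=n))
    edges = []
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if sum(a != b for a, b in zip(u, v)) == 1:
                edges.append((u, v))
    return verts, edges

verts, edges = hypercube_edges(3)   # Q_3: 8 vertices, 12 edges
```

Replacing `range(2)` with `range(p)` and a suitable adjacency rule over Z_p yields the more general product networks the paper studies.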
Minimum Storage Regenerating Code with Variable Parameters
WANG Xue-bing
Computer Science. 2020, 47 (6A): 305-309.  doi:10.11896/jsjkx.190600063
Abstract PDF(2535KB) ( 618 )   
A functional-repair minimum storage regenerating code with parameters (n,k,B,d,t) leverages an (n,k) erasure-code strategy to repair t failed nodes with the help of d helper nodes. Considering storage space, repair bandwidth, and the number of repairable nodes, a functional-repair regenerating code with parameters (n1,k1,B,d1,t1) may need to be transformed into one with parameters (n2,k2,B,d2,t2), ideally downloading as little data as possible during the transformation. To this end, by combining logical nodes with physical nodes, a functional-repair regenerating code with variable parameters is constructed. It is proved that the code can be transformed between different parameter sets and that the transformation downloads the minimum amount of data.
Radio Modulation Recognition Based on Signal-noise Ratio Classification
CHEN Jin-yin, JIANG Tao and ZHENG Hai-bin
Computer Science. 2020, 47 (6A): 310-317.  doi:10.11896/jsjkx.190800073
Abstract PDF(2790KB) ( 1184 )   
Radio modulation recognition is widely used in many military and civilian fields. Compared with traditional approaches such as manual recognition and spectrum analysis, modulation recognition based on deep learning performs better, but recognition accuracy at low SNR remains a problem. This paper proposes a modulation recognition method based on a long short-term memory (LSTM) model. It combines a deep learning classifier with SNR grading to build an SNR-graded modulation recognition framework: by accurately separating high- and low-SNR signals and applying different denoising to each, the recognition accuracy of low-SNR signal modulation is improved. Traditional machine learning methods reach 21% recognition accuracy on the 2016.4c signal data set. Comparison experiments with three treatments on the 2016.4c data set, no denoising, graded denoising and full denoising, yield recognition accuracies of 69.82%, 70.56% and 66.67% respectively, which verifies the feasibility and superiority of the proposed method for improving low-SNR recognition accuracy.
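The grading idea, denoise a signal before classification only when its estimated SNR falls below a threshold, can be sketched as follows; the moving-average denoiser, the 0 dB threshold and the toy samples are stand-in assumptions, not the paper's actual denoising method.

```python
def route_by_snr(signals, snr_threshold, denoise):
    """Grade signals by estimated SNR: low-SNR samples are denoised
    before classification, high-SNR samples pass straight through."""
    processed = []
    for snr_db, samples in signals:
        processed.append(denoise(samples) if snr_db < snr_threshold else samples)
    return processed

def moving_average(samples):
    """Toy 3-point moving-average denoiser with edge padding."""
    pad = [samples[0]] + samples + [samples[-1]]
    return [(pad[i - 1] + pad[i] + pad[i + 1]) / 3
            for i in range(1, len(samples) + 1)]

signals = [(-4, [1.0, 5.0, 1.0, 5.0]),   # noisy, gets denoised
           (12, [1.0, 2.0, 3.0, 4.0])]   # clean, passes through
out = route_by_snr(signals, snr_threshold=0, denoise=moving_average)
```

The classifier (the paper's LSTM) would then run on the processed samples, with the grading step protecting clean signals from over-smoothing, which is consistent with the full-denoising variant scoring worst in the paper's comparison.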
Load Balancing Strategy of Distributed Messaging System for Cloud Services
GAO Zi-yan and WANG Yong
Computer Science. 2020, 47 (6A): 318-324.  doi:10.11896/jsjkx.191100012
Abstract PDF(3328KB) ( 827 )   
Aiming at the problem of load skew between nodes in distributed messaging systems under cloud services, a dynamic load balancing strategy based on replica roles is proposed and applied to Apache Kafka, the distributed streaming platform. Since the function of a messaging system is to read, write and store messages, the algorithm uses CPU, disk and bytes in/out as the main load factors of a node, and proposes corresponding Leadership Movement and Replica Movement strategies according to the load type. The feasibility of the algorithm is demonstrated from the perspectives of time cost, space cost and service availability, and the influence of the algorithm's parameters on its execution is discussed. Experimental results show that the algorithm keeps the resource usage of every node in the cluster below the specified threshold. Compared with the default system, the standard deviation of cluster CPU occupancy decreases by 72.1%, of disk occupancy by 86.1%, of bytes-in rate by 79.2%, and of bytes-out rate by 63.9%. The optimization effect is remarkable.
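The threshold-driven movement strategies can be illustrated by a greedy sketch that repeatedly shifts one unit of load (say, one partition leadership) from the most-loaded to the least-loaded broker until no broker exceeds the threshold; the broker names, unit loads and threshold are illustrative assumptions, not Kafka's actual replica mechanics.

```python
def pick_moves(initial_load, threshold):
    """Greedy rebalancing sketch: while any broker exceeds the threshold,
    move one unit of load from the heaviest to the lightest broker."""
    load = dict(initial_load)
    moves = []
    while max(load.values()) > threshold:
        src = max(load, key=load.get)   # most loaded broker
        dst = min(load, key=load.get)   # least loaded broker
        load[src] -= 1
        load[dst] += 1
        moves.append((src, dst))
    return load, moves

load = {"broker1": 9, "broker2": 3, "broker3": 2}
balanced, moves = pick_moves(load, threshold=6)
```

The paper's strategy additionally distinguishes which load factor (CPU, disk, bytes in/out) is skewed and whether moving a leadership role or a whole replica is the cheaper remedy.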
Information Security
Overview of Research on Image Steganalysis Algorithms
PENG Wei, HU Ning and HU Jing-jing
Computer Science. 2020, 47 (6A): 325-331.  doi:10.11896/jsjkx.190600103
Abstract PDF(1785KB) ( 2569 )   
Image steganography is the technique of hiding sensitive or secret data in digital pictures transmitted on the Internet. It has developed rapidly during the past two decades, and is exploited by criminals, including terrorists, to exchange information that may threaten social security. Many image steganalysis techniques have been developed to counter this threat: by examining the secret information hidden in suspicious images, steganalysis can provide digital forensic evidence. This paper first surveys the research status of image steganography algorithms, then introduces and summarizes image steganalysis techniques in two categories: specialized algorithms and generalized algorithms. For specialized algorithms, approaches designed for specific steganography algorithms and specific image types are introduced respectively. For generalized algorithms, the general procedure of feature-based image steganalysis is described, and several classes of image features used for steganalysis are summarized. Furthermore, the techniques used in general image steganalysis, including machine-learning-based classification and feature selection, are analyzed by reviewing existing research on image steganalysis. Finally, a brief discussion of future research directions in image steganalysis is presented.
Comparative Research of Blockchain Consensus Algorithm
LU Ge-hao, XIE Li-hong and LI Xi-yu
Computer Science. 2020, 47 (6A): 332-339.  doi:10.11896/jsjkx.191100189
Abstract PDF(2290KB) ( 3224 )   
The consensus algorithm is the most important part of a blockchain system, and directly affects the system's efficiency, security and stability. How researchers and developers choose or design an appropriate consensus algorithm for different business scenarios is a big problem for the implementation of blockchain applications at the present stage. Starting from the Byzantine generals problem, this paper proposes the conditions a consensus algorithm should meet by design. It then divides consensus algorithms into CFT and BFT algorithms according to fault-tolerance type, describes the basic principles of nine consensus algorithms in detail, compares them in five aspects, fault tolerance, performance efficiency, degree of decentralization, resource consumption and scale of use, and summarizes their advantages and disadvantages. This is expected to help researchers and developers select or design consensus algorithms and to promote the application and evolution of blockchain consensus algorithms.
Map Analysis for Research Status and Development Trend on Network Security Situational Awareness
BAI Xue, Nurbol and WANG Ya-dong
Computer Science. 2020, 47 (6A): 340-343.  doi:10.11896/jsjkx.190500169
Abstract PDF(4322KB) ( 1447 )   
Taking the 2456 papers on network security situational awareness indexed in Web of Science from 1999 to 2019 as the data source, and mainly using the CiteSpace visualization tool, this paper analyzes the international research hotspots and research context of the field through cooperation between countries and institutions, literature co-citation, and keyword co-occurrence. The research finds that network security situational awareness needs a stronger theoretical system to support further in-depth research. In terms of application, research on multi-source data fusion is relatively mature, but the visualization of real-time situational awareness poses greater research challenges. The analysis results should help researchers in this field carry out further research.
Cryptanalysis of Cubic MI Multivariate Public Key Signature Cryptosystem
ZHANG Qi and NIE Xu-yun
Computer Science. 2020, 47 (6A): 344-348.  doi:10.11896/jsjkx.190900154
Abstract PDF(1711KB) ( 935 )   
The cubic MI multivariate public key cryptosystem is an improvement of the classical multivariate public key cryptosystem MI. By increasing the degree of the central map, the degree of the public polynomials is raised from quadratic to cubic to resist the linearization equation attack against the MI system. The authors claim that although the central map satisfies quadratic equations, this has no effect on its security. However, experimental analysis shows that for the public key cryptosystem constructed from this central map, after finding all the quadratic equations, the plaintext corresponding to a valid ciphertext can be recovered quickly in combination with the Gröbner basis method. It is also found that the complexity of the scheme instance against the MinRank attack does not reach O(2^222), but only O(2^129).
Construction of Boolean Permutation Based on Derivative of Boolean Function
WU Wan-qing, ZHOU Guo-long and MA Xiao-xue
Computer Science. 2020, 47 (6A): 349-351.  doi:10.11896/jsjkx.190800124
Abstract PDF(2752KB) ( 687 )   
The properties of the derivatives of Boolean functions play a major role in the construction of cryptosystems. This paper proposes a new balanced Boolean function by using the properties of Boolean function derivatives, and then, according to the relationship between balanced Boolean functions and Boolean permutations, constructs a new Boolean permutation.
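The derivative in question is the standard one, (D_a f)(x) = f(x) XOR f(x XOR a), and balancedness means the truth table contains equally many 0s and 1s. A minimal truth-table sketch; the example function f = x1·x2 XOR x3 is an illustrative choice, not the paper's construction:

```python
def derivative(truth_table, a, n):
    """Truth table of the derivative of an n-variable Boolean function
    along direction a: (D_a f)(x) = f(x) XOR f(x XOR a)."""
    return [truth_table[x] ^ truth_table[x ^ a] for x in range(2 ** n)]

def is_balanced(truth_table):
    """A Boolean function is balanced iff half its outputs are 1."""
    return sum(truth_table) == len(truth_table) // 2

# truth table of f(x1, x2, x3) = x1*x2 XOR x3, with x = (x3 x2 x1) in binary
f = [((x & 1) * ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
d = derivative(f, a=0b100, n=3)    # derivative along the x3 direction
```

Here f is balanced, and its derivative along x3 is the constant-1 function, the kind of derivative property the construction exploits.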
SQL Injection Recognition Based on Improved BP Neural Network
ZHU Jun-wen
Computer Science. 2020, 47 (6A): 352-359.  doi:10.11896/jsjkx.191200054
Abstract PDF(3624KB) ( 702 )   
At present, defense systems against SQL injection attacks are mostly designed from the perspective of statically filtering threatening statements one at a time. In view of their low injection-statement recognition rate and high false-positive rate, a double-layer SQL injection defense model is proposed: the continuous injection process is modeled and analyzed dynamically, and a BP neural network is introduced for self-learning and self-correction. Experiments show that in an Apache+MySQL environment, this model has a high injection recognition rate and certain advantages for the recognition of SQL injection.
Study on Security of Industrial Internet Network Transmission
WU Yu-hong and HU Xiang-dong
Computer Science. 2020, 47 (6A): 360-363.  doi:10.11896/jsjkx.191000114
Abstract PDF(2776KB) ( 1371 )   
Industrial Internet data must rely on network transmission for interoperability, so the security of network transmission is crucial for the industrial Internet, and security mechanisms should be in place throughout the transmission of information. Security controls should cover identity authentication of the subjects at both ends of the transmission, encryption of the transmitted data, and identification and authentication of the nodes along the transmission link. This paper proposes corresponding measures to avoid the security risks of the wired and wireless transmission media of the industrial Internet, analyzes in depth how to encrypt data during transmission and how to choose cryptographic algorithms, and puts forward countermeasures for other aspects of network transmission.
Abnormal User Detection Method in Sina Weibo Based on User Feature Extraction
YUAN De-yu, ZHANG Yi-fan, GAO Jian and SUN Hai-chun
Computer Science. 2020, 47 (6A): 364-368.  doi:10.11896/jsjkx.190700008
Abstract PDF(3038KB) ( 2204 )   
With the development of the Internet, Weibo has gradually become an important social medium. However, abnormal users in Weibo influence the behavior of other users by spreading harmful information, sending malicious links, and even launching malicious attacks, reducing the value of the social network, so detecting abnormal users is important. Based on data sets of abnormal and normal Weibo users obtained from multiple sources, this paper comprehensively extracts and analyzes various user attributes, and establishes an abnormal user detection model through several data mining methods to identify abnormal accounts. Experimental results with the C4.5 decision tree and random forest algorithms show that the selected features are effective and the detection accuracy for abnormal users is high.
Manufacturing Alliance System Based on Block Chain
HONG Xiao-ling, WAN Hu, XIAO Xiao and SUN Hao-xiang
Computer Science. 2020, 47 (6A): 369-374.  doi:10.11896/jsjkx.190900122
Abstract PDF(2402KB) ( 1099 )   
Manufacturing is the main body of the national economy, but compared with advanced countries, China's manufacturing industry has serious problems. With intensifying global competition and the rapid development of computer network technology, the alliance mode has become a new organizational mode of enterprise development. For manufacturing companies, seeking cooperation and combining traditional manufacturing with network manufacturing to jointly cope with the fierce market is necessary for continued development in the future information society. For manufacturing industries whose products have achieved standardized production, this paper builds on the Dynamic Alliance and proposes the concept of a Static Alliance: two or more independent enterprises connected by network information technology, established to jointly promote cooperation, transformation and upgrading, and to achieve common development and win-win cooperation. To realize the Static Alliance, this paper also puts forward the concept of a Manufacturing Alliance System based on Block Chain (MASBC). MASBC is a network platform for realizing the Static Alliance, with a five-layer architecture consisting of a physical layer, network consensus layer, data layer, server layer and user layer. MASBC combines a Manufacturing Execution System (MES) with blockchain technology: through the data acquisition function of MES and the immutability of the blockchain, the production process information of each product is stored in the blockchain to guarantee its authenticity. This information serves as the basis for final profit distribution, promoting in-depth cooperation among alliance enterprises and achieving a win-win situation.
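The immutability argument rests on hash chaining: each stored MES record carries the hash of its predecessor, so altering any stored record breaks verification from that point on. A minimal sketch (the record fields and the choice of SHA-256 are illustrative assumptions):

```python
import hashlib
import json

GENESIS = "0" * 64

def make_block(prev_hash, record):
    """Append a production record; its hash covers both the record and
    the previous block's hash, linking the chain."""
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check every link back to the genesis."""
    prev = GENESIS
    for blk in chain:
        body = json.dumps({"prev": blk["prev"], "record": blk["record"]},
                          sort_keys=True)
        if blk["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = [make_block(GENESIS, {"step": "assembly", "batch": "B-001"})]
chain.append(make_block(chain[-1]["hash"],
                        {"step": "inspection", "batch": "B-001"}))
```

Any later edit to a stored record changes its recomputed hash and makes `verify` fail, which is the property MASBC relies on when using production records as the basis for profit distribution.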
L3 Cache Attack Against Last Round of Encryption AES Table Lookup Method
LU Yao, CHEN Kai-yan, WANG Yin-long and SHANG Qian-yi
Computer Science. 2020, 47 (6A): 375-380.  doi:10.11896/jsjkx.190900157
Abstract PDF(2144KB) ( 1366 )   
Following the research status of cache side-channel attacks, a flush+flush timing attack is carried out on the AES fast encryption implementation (AESFastEngine.java) of the Bouncy Castle JDK 1.0 library, in a Linux virtual environment on a machine equipped with a four-core 3.3 GHz Intel i5-4590 CPU. While encryption executes continuously, the flush+flush method traverses shared main memory addresses to detect the active address set (the S-box addresses), finds the S-box offsets, and monitors the table entries at those offsets. Among all ciphertexts, the ciphertext values corresponding to shorter flush+flush times are selected and combined with the monitored S-box table entry values to restore the last round key; that is, determining which S-box entries are used reveals the key value used in the last round. The method requires a large amount of known ciphertext, and can accurately compute the S-box offsets and the last round key values.
HTTPS Encrypted Traffic Classification Method Based on C4.5 Decision Tree
ZOU Jie, ZHU Guo-sheng, QI Xiao-yun and CAO Yang-chen
Computer Science. 2020, 47 (6A): 381-385.  doi:10.11896/jsjkx.191200155
Abstract PDF(1987KB) ( 1630 )   
The HTTPS protocol is built on the HTTP protocol, which has no encryption mechanism of its own, by combining it with SSL/TLS: before data is transmitted, an SSL/TLS handshake between client and server negotiates the cipher suite used in the session, securely exchanges secret keys and performs mutual authentication. Once the secure channel is established, HTTP application data is transmitted encrypted, preventing eavesdropping and tampering with the communication content. Traditional payload-based methods cannot handle encrypted traffic, so classification and analysis of encrypted traffic based on flow features and machine learning have become the mainstream approach. By building a supervised learning model on network flow feature engineering, and while preserving encryption integrity, the C4.5 decision tree algorithm is applied in a LAN environment to analyze HTTPS encrypted data streams of Tencent applications, effectively achieving accurate classification of website HTTPS encrypted traffic.
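The splitting criterion that distinguishes C4.5 from ID3 is the gain ratio, information gain normalized by the split's own entropy so that many-valued features are not unduly favored. A minimal sketch on hypothetical flow features (the feature names and labels are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, feature):
    """C4.5 criterion: information gain divided by split information."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[feature], []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond
    split_info = entropy([row[feature] for row in rows])
    return gain / split_info if split_info else 0.0

# hypothetical flow records: packet-size class and TLS-record-count class
rows = [{"pkt": "big", "rec": "many"}, {"pkt": "big", "rec": "few"},
        {"pkt": "small", "rec": "many"}, {"pkt": "small", "rec": "few"}]
labels = ["video", "video", "web", "web"]
r_pkt = gain_ratio(rows, labels, "pkt")   # perfectly separates classes
r_rec = gain_ratio(rows, labels, "rec")   # carries no class information
```

The tree greedily splits on the feature with the highest gain ratio at each node, which here would be `pkt`.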
Sanitizable Signature Scheme Based on Ring Signature and Short Signature
ZHANG Jun-he, ZHOU Qing-lei and HAN Ying-jie
Computer Science. 2020, 47 (6A): 386-390.  doi:10.11896/jsjkx.190500061
Abstract PDF(2604KB) ( 903 )   
Among existing sanitizable signature schemes that achieve full security requirements, schemes based on group signatures are impractical due to low efficiency, while those based on zero-knowledge proofs are more efficient but less secure. Therefore, this paper proposes a new sanitizable signature scheme based on ring signatures and short signatures. It meets the five fundamental security requirements of sanitizable signatures, i.e. unforgeability, immutability, transparency, full privacy and auditability, and offers stronger auditability and higher computational efficiency than the zero-knowledge-proof-based scheme, making it more practical.
Trust Collection Consensus Algorithm Based on Gossip Protocol
ZHANG Qi-wen, WANG Zhi-qiang and ZHANG Yi-qian
Computer Science. 2020, 47 (6A): 391-394.  doi:10.11896/jsjkx.191000051
Abstract PDF(1945KB) ( 1055 )   
The consensus algorithm is the basis of the trust characteristics of a blockchain, and ensuring its efficiency and stability is a hot research topic. The Gossip protocol is widely used as the underlying framework of consensus algorithms because of its efficiency and scalability. However, communication between nodes in the traditional Gossip protocol is random, so consensus time is insufficiently stable; and because consensus time cannot be predicted, the protocol cannot be applied where strong consistency is required. To solve the problems of insufficient stability and lack of final consensus in the Gossip protocol, a trust collection consensus algorithm based on the Gossip protocol is proposed. A node selects communication partners by evaluating the information degree of neighboring nodes, and a message collects trust values as it propagates; the message is not considered agreed upon until its collected trust exceeds the network-wide critical threshold. At the same time, a time degradation factor is used to control node information degree, preventing hot spots and maintaining network load balance. Experiments show that the proposed algorithm (CCG) has higher stability and efficiency than the traditional and random Gossip algorithms.
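The trust-collection rule can be simulated in a few lines: a message spreads by gossip and accumulates each new holder's trust value until the network-wide threshold is crossed; the fully connected topology, uniform trust values and threshold are illustrative assumptions (the paper additionally biases peer selection by information degree).

```python
import random

random.seed(42)

def gossip_round(holders, trust, peers, node_trust):
    """One gossip round: every node already holding the message forwards
    it to one random peer; each newly reached node adds its trust value."""
    for node in list(holders):
        peer = random.choice(peers[node])
        if peer not in holders:
            holders.add(peer)
            trust += node_trust[peer]
    return holders, trust

# fully connected 6-node network; every node contributes trust 1.0
peers = {i: [j for j in range(6) if j != i] for i in range(6)}
node_trust = {i: 1.0 for i in range(6)}

holders, trust, threshold, rounds = {0}, node_trust[0], 4.0, 0
while trust < threshold:          # consensus once collected trust crosses it
    holders, trust = gossip_round(holders, trust, peers, node_trust)
    rounds += 1
```

Because consensus is declared by an explicit trust threshold rather than by probabilistic saturation, the stopping condition is well defined, which is the stability property the paper targets.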
(α,k)-anonymized Model for Missing Data
ZHANG Wang-ce, FAN Jing, WANG Bo-ru and NI Min
Computer Science. 2020, 47 (6A): 395-399.  doi:10.11896/jsjkx.190500131
Abstract PDF(2569KB) ( 737 )   
References | Related Articles | Metrics
Before a dataset is published, its quasi-identifier attributes need to be anonymized to guard against linking attacks. However, existing data anonymization algorithms are all oriented to complete data: tuples containing missing values are simply deleted, which reduces data availability. In this paper, missing data and intact data are anonymized together within a single algorithm that builds on (α,k)-anonymity. Experimental results show that the improved (α,k)-anonymity model for missing data effectively improves the availability of the anonymized data while still achieving anonymization.
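For reference, the standard (α,k)-anonymity requirement that the paper extends can be checked as follows: every equivalence class over the quasi-identifiers must contain at least k records, and within each class no sensitive value may exceed relative frequency α. The record layout and column names here are illustrative.

```python
from collections import Counter, defaultdict

def satisfies_alpha_k(records, qi_cols, sensitive_col, alpha, k):
    """Check (α,k)-anonymity: every equivalence class over the
    quasi-identifier columns has size >= k, and within each class the
    relative frequency of every sensitive value is <= alpha."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[c] for c in qi_cols)
        classes[key].append(rec[sensitive_col])
    for values in classes.values():
        if len(values) < k:
            return False  # class too small: vulnerable to linking
        counts = Counter(values)
        if max(counts.values()) / len(values) > alpha:
            return False  # a sensitive value dominates the class
    return True
```

The paper's contribution is to make such a check attainable without discarding tuples that have missing quasi-identifier values.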
New Approach for Graded and Classified Cloud Data Access Control for Public Security Based on TFR Model
GU Rong-jie, WU Zhi-ping and SHI Huan
Computer Science. 2020, 47 (6A): 400-403.  doi:10.11896/jsjkx.191000066
Abstract PDF(1713KB) ( 790 )   
References | Related Articles | Metrics
In recent years, the development of big data for public security has been accelerating. The unified construction of public security data centers around the country has brought about a high concentration of sensitive data, sharply increasing the risk of leaking information related to national security and of illegal use of personal information. On the basis of traditional data security protection methods such as data encryption and role-based access control, this paper presents a new access control model based on data grading and classification. By grading and classifying data sensitivity, personnel and data, the model achieves hierarchical control at the level of data tables, data fields and data records, which helps achieve precise, graded and classified access authorization for sensitive public security data with greater flexibility and finer granularity, and can be effectively applied to the construction of data access security control systems for modern big data cloud platforms for smart public security. The model has been applied to smart public security construction in some areas and has achieved satisfactory results.
Virtual Network Function Deployment Strategy Based on Software Defined Network Resource Optimization
HUANG Mei-gen, WANG Tao, LIU Liang, PANG Rui-qin and DU Huan
Computer Science. 2020, 47 (6A): 404-408.  doi:10.11896/jsjkx.191000116
Abstract PDF(2173KB) ( 1031 )   
References | Related Articles | Metrics
With the continuous development of software-defined networking (SDN) and network function virtualization (NFV) technology, hardware middleboxes such as firewalls and intrusion detection systems are being replaced by virtual network functions (VNFs) dynamically deployed on commodity servers. In addition, to satisfy traffic security and performance policies, network traffic usually needs to traverse a specific VNF sequence, known as a service function chain, which makes dynamic VNF deployment a hot topic in software-defined networking. Many deployment strategies have been proposed, but most studies are conducted under a single-resource constraint and cannot achieve load balancing of global network resources. Therefore, this paper proposes a VNF deployment strategy that fully considers global network resources. Firstly, the overall network model is given, and an integer linear programming model is introduced for the mathematical formulation. Then, an improved model-solving algorithm is proposed, which effectively utilizes network resources and achieves load balancing under global resource constraints. Finally, simulation results show that the proposed deployment algorithm reduces load imbalance and improves the request acceptance rate.
Optimization and Design of PTN Network Security Structure
YAN Zhen, TIAN Yi, DUAN Zhi-guo, YU Zhen-jiang, WANG Yu and ZHA Fan
Computer Science. 2020, 47 (6A): 409-412.  doi:10.11896/jsjkx.190900160
Abstract PDF(3458KB) ( 1213 )   
References | Related Articles | Metrics
PTN (Packet Transport Network) is compatible with a variety of networks, such as ATM, SDH, Ethernet, PDH and PPP/HDLC, and is widely used in networking communications. PTN organically combines data and transmission technologies, greatly strengthening an operator's basic network. At present, however, PTN cannot meet user needs in terms of multi-network communication, transmission bandwidth, traffic and information security. This paper designs a new PTN network security architecture. It selects the architecture type according to service and traffic, and introduces OTN core networking and the p-Cycle protection algorithm. The OTN core networking solution greatly improves the bandwidth and data transmission capability of aggregation rooms, and the equipment has strong expansion capacity. The p-Cycle protection algorithm, together with a constructed network topology map, improves the security of PTN data transmission. The paper also designs the PTN networking software architecture, which is convenient for users to operate and query. Experiments show that the designed scheme greatly reduces networking difficulty, improves network speed and service quality, and offers better scalability and security.
SIP Authentication Key Agreement Protocol Based on Certificateless Cryptography
MO Tian-qing and HE Yong-mei
Computer Science. 2020, 47 (6A): 413-419.  doi:10.11896/jsjkx.191100216
Abstract PDF(1916KB) ( 891 )   
References | Related Articles | Metrics
Existing SIP authentication schemes do not resist ephemeral secret leakage attacks and incur high system costs. A secure SIP communication model based on an improved SIP is proposed. Building on the primary SIP flow and combining it with certificateless signcryption, a new anonymous two-party authenticated key agreement protocol is adopted to achieve mutual authentication and unlinkability. Furthermore, the new protocol is provably secure in the seCK model under the CDH assumption. Comparative analysis shows that the protocol is simple and efficient.
Publicly Traceable Accountable Ciphertext Policy Attribute Based Encryption Scheme Supporting Large Universe
MA Xiao-xiao and HUANG Yan
Computer Science. 2020, 47 (6A): 420-423.  doi:10.11896/jsjkx.190700131
Abstract PDF(1615KB) ( 716 )   
References | Related Articles | Metrics
Ciphertext-policy attribute-based encryption (CP-ABE) flexibly achieves one-to-many encryption. In particular, large-universe attribute-based encryption supports an unbounded attribute universe and has extensive applications in cloud computing, big data, etc. However, because a decryption key may correspond to multiple users, malicious users dare to share their decryption privileges with others for profit. To solve this problem and publicly verify the identity behind a leaked secret key, this paper proposes an accountable attribute-based encryption scheme that supports a large universe. The proposed scheme supports LSSS-realizable access structures. In addition to fixed-length system public parameters, the identity of the user who leaked the decryption key can be publicly verified at constant storage cost.
Database & Big Data & Data Science
Research on Prediction of Re-shopping Behavior of E-commerce Customers
LV Ze-yu, LI Ji-xuan, CHEN Ru-jian and CHEN Dong-ming
Computer Science. 2020, 47 (6A): 424-428.  doi:10.11896/jsjkx.190900018
Abstract PDF(4048KB) ( 1357 )   
References | Related Articles | Metrics
The study of customers' shopping behavior is a trending research topic with great commercial value for e-commerce companies. This paper studies the prediction of customers' re-shopping behavior on the same e-commerce platform. Through analysis of customers' shopping-related actions and the transaction records between customers and merchants, a variety of behavior features are designed following feature-engineering principles, and the importance and characteristics of the predictive features are analyzed with visualization approaches. Then, based on the proposed predictive features, several algorithms are used to train prediction models. Experiments show that a multi-LightGBM model ensemble achieves high prediction accuracy, with an AUC of 0.7018; moreover, the predictor needs only a few features to obtain very good results. The experimental dataset is open-source big data collected in a real environment, and the conclusions have both application and academic value.
Application of Improved GHSOM Algorithm in Civil Aviation Regulation Knowledge Map Construction
ZHANG Hao-yang and ZHOU Liang
Computer Science. 2020, 47 (6A): 429-435.  doi:10.11896/jsjkx.190700161
Abstract PDF(2460KB) ( 639 )   
References | Related Articles | Metrics
Aiming at the problems that the number of clusters cannot change dynamically and text classification results are not accurate enough during text clustering, this paper introduces and improves the Growing Hierarchical Self-Organizing Map (GHSOM) algorithm to improve text clustering accuracy, and uses the improved GHSOM algorithm to build a knowledge map of civil aviation regulations. The GHSOM algorithm has a multi-level hierarchical structure in which each layer contains several independent growing SOMs; as the hierarchy grows, the dataset is described in progressively finer detail and the classification effect improves. On this basis, taking laws and regulations in the field of civil aviation as the sample dataset and combining Chinese word segmentation, keyword extraction, document vectorization and other techniques, the texts are clustered with the improved GHSOM algorithm, and the construction of the civil aviation regulation knowledge map is completed. Experimental results show that the proposed algorithm has significant text clustering ability; the knowledge map constructed with it achieves good classification results, and evaluation indicators such as precision and recall are further improved.
Attribute Reduction Methods of Formal Context Based on Object (Attribute) Oriented Concept Lattice
YUE Xiao-wei, PENG Sha and QIN Ke-yun
Computer Science. 2020, 47 (6A): 436-439.  doi:10.11896/jsjkx.191100011
Abstract PDF(1628KB) ( 707 )   
References | Related Articles | Metrics
Attribute reduction of a formal context is one of the important research topics of formal concept analysis. This paper discusses attribute reduction methods that preserve the structures of the object-oriented concept lattice and the property-oriented concept lattice. By analyzing the related granular concepts, a new judgment theorem for consistent sets based on object-oriented and attribute-oriented concept lattices is proposed. Then, new discernibility attribute sets and discernibility attribute matrices are established. The attribute reductions preserving the structures of the object-oriented and property-oriented concept lattices are computed through the conversion of Boolean logic formulas. The proposed method avoids computing all object-oriented and attribute-oriented concepts. In addition, the characteristics of attributes with respect to the object-oriented and property-oriented concept lattices are given, and equivalent descriptions of absolutely necessary, relatively necessary and absolutely unnecessary attributes are provided.
Novel Clustering Algorithm Based on Timing-featured Alarms
DENG Tian-tian, XIONG Yin-qiao and HE Xian-hao
Computer Science. 2020, 47 (6A): 440-443.  doi:10.11896/jsjkx.190600173
Abstract PDF(3006KB) ( 1278 )   
References | Related Articles | Metrics
In the cloud environment, large-scale clusters generate massive alarms with timing features. In practice, operations personnel generally use these alarms to locate, check and repair faults and errors and to keep systems running normally. How to efficiently cluster the alarms and mine their key information is therefore a core issue in keeping a cloud running continuously and stably. This paper proposes a novel clustering algorithm for timing-featured alarms. The algorithm constructs a relation matrix from the time difference between each pair of alarms within a given time window, then applies the K-means algorithm to the column vectors of the relation matrix to obtain the alarm clusters. Experimental results show that the algorithm can cluster massive alarms efficiently.
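The two stages of the abstract, building a pairwise time-difference relation matrix and then running K-means on its columns, can be sketched as follows. The exponential-decay similarity is an illustrative assumption (the abstract only states that the matrix is built from pairwise time differences), and the deterministic K-means initialization is a simplification.

```python
import math

def alarm_relation_matrix(timestamps, window):
    """Relation matrix: entry (i, j) measures how close alarms i and j
    are in time; pairs farther apart than `window` are unrelated (0)."""
    n = len(timestamps)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dt = abs(timestamps[i] - timestamps[j])
            m[i][j] = math.exp(-dt) if dt <= window else 0.0
    return m

def kmeans(points, k, iters=50):
    """Plain K-means on a list of vectors (here: matrix columns),
    with simple deterministic initialization from the first k points."""
    centers = [list(points[i]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

Alarms that fire close together in time produce similar columns in the relation matrix, so K-means on the columns groups temporally correlated alarms.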
PM2.5 Concentration Prediction Method Based on CEEMD-Pearson and Deep LSTM Hybrid Model
DING Zi-ang, LE Cao-wei, WU Ling-ling and FU Ming-lei
Computer Science. 2020, 47 (6A): 444-449.  doi:10.11896/jsjkx.190700158
Abstract PDF(2875KB) ( 763 )   
References | Related Articles | Metrics
PM2.5 is the key indicator for measuring the concentration of air pollutants. Accurately predicting future PM2.5 concentrations by mining the time series characteristics of historical PM2.5 data is of great significance for both academic study and applications. However, the correlation structure of the original PM2.5 concentration time series strongly affects model prediction accuracy. To solve this problem, a PM2.5 concentration prediction method based on a CEEMD-Pearson and deep LSTM hybrid model is proposed. The CEEMD modal decomposition method decomposes the historical PM2.5 data into components at different frequencies and enhances the timing characteristics of the data. Then, the Pearson correlation test screens the decomposed IMFs at different frequencies, and the filtered, enhanced data are fed into a deep LSTM network with multiple hidden layers for training and prediction. Experiments show that a plain CEEMD-LSTM hybrid model reaches 80% prediction accuracy but converges only after 7000 training iterations; with the secondary screening of the Pearson correlation test, the model converges after 800 iterations and accuracy improves to 87%; the full hybrid model combining CEEMD-Pearson with the deep LSTM network performs best, converging after 650 iterations with 90% accuracy. The results show that CEEMD decomposition exposes the hidden time series characteristics of the historical data, and secondary screening with Pearson correlation analysis effectively improves convergence speed and prediction accuracy. The CEEMD-Pearson and deep LSTM hybrid model therefore achieves the best training result, the fastest convergence and the most accurate prediction, effectively solving the PM2.5 concentration prediction problem.
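The Pearson screening step can be sketched independently of the CEEMD decomposition and the LSTM: compute each IMF's correlation with the original series and keep only the informative ones. The 0.2 threshold is an illustrative assumption, not a value from the paper.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def screen_imfs(imfs, target, threshold=0.2):
    """Keep only the IMFs whose absolute Pearson correlation with the
    original series exceeds the threshold (threshold is illustrative)."""
    return [imf for imf in imfs if abs(pearson(imf, target)) > threshold]
```

Discarding weakly correlated components shrinks the network input, which is consistent with the reported drop in training iterations needed for convergence.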
Classification Algorithm of Distributed Data Mining Based on Judgment Aggregation
LI Li
Computer Science. 2020, 47 (6A): 450-456.  doi:10.11896/jsjkx.190700143
Abstract PDF(3069KB) ( 726 )   
References | Related Articles | Metrics
With the development of the Internet and the wide application of cloud computing, many datasets are stored on different servers, and distributed data mining has emerged. Each agent obtains partial data mining results at its own site, and distributed data mining aggregates these partial results into a global decision. This paper focuses on the classification issue in distributed data mining. Aiming at specific data stored in different data sources, it puts forward a classification algorithm based on the judgment aggregation model. Each agent gives its judgment on whether a new case belongs to a target class, and the judgment aggregation model then aggregates the agents' judgments into a global classification. The algorithm combines techniques from logic and social choice theory and applies them to the classification problem in distributed data mining. It does not require large-scale data transfer and transformation, thus saving transmission cost and improving classification efficiency, while also effectively protecting data security.
Selective Clustering Ensemble Based on Xie-Beni Index
SHAO Chao and MA Jin-jia
Computer Science. 2020, 47 (6A): 457-460.  doi:10.11896/jsjkx.190700044
Abstract PDF(2594KB) ( 886 )   
References | Related Articles | Metrics
Selective clustering ensemble selects base clustering results with high accuracy and large diversity for integration, so as to obtain a more effective ensemble result. In cluster analysis applications, cluster validity indices are used to measure the goodness of clustering results. This paper proposes a selective clustering ensemble algorithm based on the Xie-Beni index. The algorithm uses the Xie-Beni index to measure the validity of the base clustering results and uses NMI (normalized mutual information) to select the better base clusterings for aggregation, thereby improving the accuracy of the final clustering. Experimental results confirm the effectiveness of the algorithm.
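For context, the Xie-Beni index used for validity measurement can be written for a crisp partition as total within-cluster scatter divided by n times the minimum squared distance between cluster centers (the original index is defined for fuzzy partitions with membership weights; the crisp form below is a simplification).

```python
def xie_beni(points, labels, centers):
    """Xie-Beni index for a crisp partition: compactness (total squared
    distance of points to their assigned centers) divided by
    n * separation (minimum squared distance between two centers).
    Lower values indicate a better clustering."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    compact = sum(sqdist(p, centers[l]) for p, l in zip(points, labels))
    sep = min(sqdist(centers[i], centers[j])
              for i in range(len(centers))
              for j in range(i + 1, len(centers)))
    return compact / (len(points) * sep)
```

Ranking base clusterings by this index (lower is better) and then filtering by NMI for diversity is the selection strategy the abstract describes.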
Multi-objective Evolutionary Algorithm Based on Community Detection Spectral Clustering
DONG Ming-gang, GONG Jia-ming and JING Chao
Computer Science. 2020, 47 (6A): 461-466.  doi:10.11896/jsjkx.191100215
Abstract PDF(2420KB) ( 1021 )   
References | Related Articles | Metrics
Multi-objective optimization algorithms are competitive in complex network community detection. However, it is difficult to obtain satisfactory results when dealing with fuzzy community structure and large-scale network data. To overcome the shortcomings of existing multi-objective methods, a multi-objective complex network community detection algorithm based on spectral clustering is proposed. The algorithm uses spectral clustering to perform initial population partitioning on the encoded complex network, exploiting its subgraph clustering characteristics to obtain a better initial population. A data reduction method based on grid reduction is applied to shrink the population during evolution, which effectively reduces the complexity of the algorithm. Experimental results on simulated and real networks show that the proposed algorithm outperforms three representative multi-objective community detection algorithms in terms of community detection performance and computational complexity.
Model for Stock Price Trend Prediction Based on LSTM and GA
BAO Zhen-shan, GUO Jun-nan, XIE Yuan and ZHANG Wen-bo
Computer Science. 2020, 47 (6A): 467-473.  doi:10.11896/jsjkx.190900128
Abstract PDF(2758KB) ( 3373 )   
References | Related Articles | Metrics
Making accurate financial time series predictions is one of the important problems of quantitative finance. Long short-term memory (LSTM) neural networks handle complex serialized learning problems such as stock prediction much better than earlier models. However, previous studies still show problems such as unbalanced prediction and local minima, which lead to poor predictive ability. To address these problems, the genetic algorithm (GA) is used to solve the parameter adjustment problem and ensure balanced model prediction, and a new stock prediction model is constructed. First, an LSTM neural network predicts the closing price; then the prediction results are passed to a judgment method based on the genetic algorithm; finally, the predicted rise and fall signals of the stock are produced as output. Unlike previous state-of-the-art models, the improvement mainly targets the output module of the LSTM model. High-frequency trading data of a Chinese stock index are used for verification, and the results show that the improved model outperforms the plain LSTM model.
Research on HBase Configuration Parameter Optimization Based on Machine Learning
XU Jiang-feng and TAN Yu-long
Computer Science. 2020, 47 (6A): 474-479.  doi:10.11896/jsjkx.190900046
Abstract PDF(5314KB) ( 781 )   
References | Related Articles | Metrics
HBase is a distributed database management system that is becoming increasingly popular for applications requiring fast random access to large amounts of data. However, it has many performance-critical configuration parameters that interact with each other in complex ways, making manual tuning for optimal performance extremely difficult. This paper proposes a new method, auto-tuning HBase, to automatically tune the configuration parameters of a given HBase application. The key is to build a low-cost performance model with the configuration parameters as input. Different modeling techniques are systematically studied, and an ensemble learning algorithm is used to construct the performance model. A genetic algorithm then searches the performance model for the optimal configuration parameters, so a set of parameter values that maximizes application performance can be identified quickly and automatically. Tests on five applications with the Yahoo! Cloud Serving Benchmark show that, compared with the default configuration, the optimized throughput increases by 41% on average and by up to 97%, while HBase operation latency decreases by 11.3% on average and by up to 57%.
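The search stage, a genetic algorithm probing a cheap surrogate model instead of running HBase itself, can be sketched generically. Everything here is illustrative: `predict` stands in for the learned performance model, and the operator choices (elitist selection, uniform crossover, 10% mutation) are assumptions, not the paper's settings.

```python
import random

def ga_search(predict, bounds, pop_size=20, gens=30, seed=0):
    """Genetic-algorithm search over a surrogate performance model:
    `predict` maps a configuration vector to predicted throughput,
    `bounds` gives (low, high) per parameter. Returns the best
    configuration found."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        # keep the best half, refill with crossover + mutation
        elite = sorted(pop, key=predict, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.1:  # occasional random-reset mutation
                i = rng.randrange(dim)
                child[i] = rng.uniform(*bounds[i])
            children.append(child)
        pop = elite + children
    return max(pop, key=predict)
```

Because each fitness evaluation is only a model prediction, thousands of candidate configurations can be scored in the time one real benchmark run would take, which is what makes the approach practical.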
Improved Locality and Similarity Preserving Feature Selection Algorithm
LI Jin-xia, ZHAO Zhi-gang, LI Qiang, LV Hui-xian and LI Ming-sheng
Computer Science. 2020, 47 (6A): 480-484.  doi:10.11896/jsjkx.20190800095
Abstract PDF(2838KB) ( 791 )   
References | Related Articles | Metrics
The LSPE (locality and similarity preserving embedding) feature selection algorithm first maintains the locality of the data based on a pre-defined KNN graph structure, and then maintains locality and similarity based on low-dimensional reconstruction coefficients learned from that graph. The two steps are independent and lack interaction. Since the number of nearest neighbors is set manually, the learned graph structure has no adaptive neighbors and is not optimal, which affects the performance of the algorithm. To optimize LSPE, an improved locality and similarity preserving feature selection algorithm is proposed. It incorporates graph learning, sparse reconstruction and feature selection into the same framework, so that graph learning and sparse coding are carried out simultaneously. The coding process is required to be sparse, adaptively neighbored and non-negative. The goal is to find a projection that maintains the locality and similarity of the data, apply an l2,1-norm to the projection matrix, and then select the relevant features that maintain locality and similarity. Experimental results show that the improved algorithm reduces subjective influence, eliminates the instability of feature selection, is more robust to data noise, and improves image classification accuracy.
Medium and Long-term Population Prediction Based on GM(1,1)-SVM Combination Model
XU Xiang-yan and HOU Rui-huan
Computer Science. 2020, 47 (6A): 485-487.  doi:10.11896/jsjkx.190900168
Abstract PDF(2903KB) ( 1013 )   
References | Related Articles | Metrics
Accurate prediction of the future population is of practical significance for formulating economic policies. In this paper, a combined grey and support vector machine prediction model is constructed in view of the complicated influencing factors of medium- and long-term prediction, the scarcity of available historical data, and the limitations of any single model. The model combines the grey prediction model GM(1,1) with the support vector machine model and uses the standard deviation method to determine the combination weights. The model is applied to the medium- and long-term prediction of the population of Alar City: population data of the First Division, Alar City, from 1997 to 2017 are selected for analysis to predict the years 2018 to 2022. The results show that, compared with either single model, the combined model has higher prediction accuracy and lower relative error, and its predictions are more stable and realistic.
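The grey half of the combination, GM(1,1), is a standard procedure and can be sketched as follows: accumulate the series, fit the grey differential equation x0(k) + a*z1(k) = b by least squares on the background values z1, and restore forecasts by differencing the accumulated prediction. (The SVM half and the standard-deviation weighting are omitted here.)

```python
import math

def gm11_forecast(x0, steps):
    """GM(1,1) grey forecasting for a short positive series `x0`,
    returning `steps` out-of-sample forecasts."""
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]          # accumulated series
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    # least-squares solution of x0(k) = -a*z1(k) + b, k = 1..n-1
    m = n - 1
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # solution of the whitened equation, 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return fitted[n:]
```

GM(1,1) fits approximately exponential trends well from very few observations, which is why it suits medium- and long-term population series with limited history; the SVM component then compensates for its weakness on non-exponential fluctuations.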
New Associative Classification Algorithm for Imbalanced Data
CUI Wei, JIA Xiao-lin, FAN Shuai-shuai and ZHU Xiao-yan
Computer Science. 2020, 47 (6A): 488-493.  doi:10.11896/jsjkx.190600132
Abstract PDF(1841KB) ( 847 )   
References | Related Articles | Metrics
Rule-based classification algorithms have been widely used because of their good classification performance and interpretability. However, existing rule-based classification algorithms do not consider imbalanced data, which affects their classification effect on such data. In this paper, a new associative classification algorithm for imbalanced data, ACI, is proposed. Firstly, all association rules are generated; then the rules are pruned by an imbalance-aware rule pruning method; finally, the remaining rules are stored in a CR-tree for classifying new instances. Experimental results on 27 public datasets show that the proposed algorithm outperforms the compared algorithms.
Adaptive High-order Rating Distance Recommendation Model Based on Newton Optimization
ZOU Hai-tao, ZHENG Shang, WANG Qi, YU Hua-long and GAO Shang
Computer Science. 2020, 47 (6A): 494-499.  doi:10.11896/jsjkx.190900016
Abstract PDF(2264KB) ( 595 )   
References | Related Articles | Metrics
Some existing recommendation algorithms introduce the latent factor model to overcome problems caused by data scarcity and thus provide more effective recommendations. In general, these methods construct an optimization function that minimizes rating error or maximizes preference by integrating several polynomial terms with corresponding trade-off parameters, and solve it with stochastic gradient descent. Nevertheless, such models only consider the difference between the estimated and real ratings of the same user-item pair (the first-order rating distance) and ignore the difference between the estimated and real ratings of the same user across different items (the second-order rating distance). The high-order rating distance model HoORaYs, which takes both distances into account, achieves good accuracy in item ranking and rating prediction; unfortunately, it still has flaws in adaptability and efficiency due to its manually set parameters and possible non-convergence. To improve recommendation adaptability and efficiency, an adaptive high-order rating distance model integrating a data-scale-sensitive function is proposed. It uses Newton's method to solve the convex optimization problem over the rating distances, which both eliminates manual parameter setting and accelerates convergence of the optimization function. The proposed model has solid theoretical support, and experiments on three real datasets show good prediction accuracy and runtime efficiency.
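The Newton step that replaces stochastic gradient descent can be illustrated in its simplest one-variable form; the paper's actual objective (over rating distances and latent factors) is multivariate, so this is only a sketch of the idea that a second-order update converges in far fewer iterations on a convex objective.

```python
def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for a smooth convex function of one variable:
    iterate x <- x - f'(x)/f''(x) until the gradient is near zero."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= g / hess(x)  # second-order step: no learning rate to tune
    return x
```

For a quadratic objective the update reaches the minimizer in a single step, which is the source of the "no manual step-size parameters, faster convergence" claim.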
Underwater Image Reconstruction Based on Improved Residual Network
SONG Ya-fei, CHEN Yu-zhang, SHEN Jun-feng and ZENG Zhang-fan
Computer Science. 2020, 47 (6A): 500-504.  doi:10.11896/jsjkx.200100084
Abstract PDF(3491KB) ( 956 )   
References | Related Articles | Metrics
Natural environmental factors in underwater imaging, such as turbulence and suspended particles, cause distortion, low resolution and blurred backgrounds in captured underwater images. To solve these problems and further improve the quality of image reconstruction and restoration, this paper proposes an improved residual-network-based image super-resolution reconstruction method. The method fuses an adaptive mechanism into a residual dense network, effectively alleviating the gradient explosion problem often encountered in deep learning while suppressing the learning of useless information, so that important feature information is fully used. To adapt the network to the underwater noise environment, a self-built underwater system collects images of a target plate in clear water and in turbid, weakly turbulent water to generate training pairs, and test images are collected in river and ocean waters. Experimental results show that, in weakly turbulent ocean and river waters, the improved residual network algorithm reconstructs underwater images better than traditional underwater image processing and neural network algorithms.
New Method of Data Missing Estimation for Vehicle Traffic Based on Tensor
ZHANG De-gan, FAN Hong-rui, GONG Chang-le, GAO Jin-xin, ZHANG Ting, ZHAO Peng-zhen and CHEN Chen
Computer Science. 2020, 47 (6A): 505-511.  doi:10.11896/jsjkx.190700045
Abstract PDF(3317KB) ( 975 )   
References | Related Articles | Metrics
Faced with today's huge volume of intelligent traffic data, collection and statistical processing are necessary and important, but the inevitable problem of missing data is a current research focus. Aiming at missing vehicle traffic data, this paper proposes a new tensor-based estimation method, integrated Bayesian tensor decomposition (IBTD). In the model construction stage, random sampling is used to extract subsets from the incomplete data, and an optimized Bayesian tensor decomposition algorithm interpolates each subset. Introducing the idea of ensembling, the errors of multiple interpolations are analyzed and ranked, spatio-temporal complexity is taken into account, and the optimal average is chosen as the final result. Model performance is evaluated by mean absolute percentage error (MAPE) and root mean square error (RMSE). Experimental results show that the proposed method can effectively interpolate traffic datasets with different missing rates and obtain good interpolation results.
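The two evaluation metrics named in the abstract are standard and worth stating precisely, since they are how the interpolation quality is ranked:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values nonzero)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

MAPE is scale-free and easy to interpret across detectors with different traffic volumes, while RMSE penalizes large individual errors more heavily; reporting both gives a balanced picture of interpolation quality.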
Research on Premium Income Forecast Based on X12-LSTM Model
DIAO Li and WANG Ning
Computer Science. 2020, 47 (6A): 512-516.  doi:10.11896/jsjkx.191100077
Abstract PDF(3053KB) ( 898 )   
References | Related Articles | Metrics
Under the new normal of the economy, the prediction of premium income is a topic of common concern in academia and industry. Considering the strong seasonality of premium income time series, this paper constructs an X12-LSTM model based on the long short-term memory neural network to predict premium income, and compares it with a plain LSTM model, a SARIMA model and a BP neural network. Experimental results show that the X12-LSTM model is the most accurate and stable predictor of premium income. Compared with the plain LSTM model, X12-LSTM improves accuracy by 8% and stability by 8%, which shows that X12-LSTM is an effective improvement on the plain LSTM model and is better suited to forecasting seasonal data.
Agricultural Product Quality Classification Based on GA-SVM
MA Chuang, LV Xiao-fei and LIANG Yan-ming
Computer Science. 2020, 47 (6A): 517-520.  doi:10.11896/jsjkx.190900184
Abstract PDF(2095KB) ( 676 )   
References | Related Articles | Metrics
Traditional methods classify agricultural products at a fine-grained level and determine the key factors affecting the classification effect, but ignore product quality characteristics. Scientific classification of agricultural product quality can not only effectively speed up subsequent processing, but also better reflect changes in product quality. Starting from quality characteristics, agricultural products are classified and different types are processed by different methods, ensuring product quality and increasing added value. The classification method and the selection of model parameters are especially important for classification accuracy, and the traditional support vector machine (SVM) selects its model parameters blindly. To improve the classification accuracy of agricultural product quality, a product quality classification model combining factor analysis (FA) with a genetic-algorithm-optimized support vector machine (GA-SVM) is proposed. Experimental results show that the improved SVM can quickly and effectively identify quality categories of agricultural products and significantly improves classification accuracy; the evaluation process is relatively simple and can be widely applied to the evaluation of agricultural product quality.
Research on Agricultural Products Recommendation Technology Based on User Interest
LI Jian-jun, FU Jia, YANG Yu, HOU Yue, WANG Xiao-ling and RONG Xin
Computer Science. 2020, 47 (6A): 521-525.  doi:10.11896/jsjkx.190900131
Abstract PDF(1959KB) ( 896 )   
References | Related Articles | Metrics
At present, the Internet is developing rapidly and competition in the agricultural e-commerce market is increasingly fierce, so users cannot find products suitable for themselves in the huge amount of product information. The traditional collaborative filtering algorithm only pays attention to user ratings and cannot reflect changes in users' interests in time. In view of this problem, a user interest recommendation algorithm based on improved weights, WUI-CF, is proposed, which considers user behavior together with access time and frequency. Experimental results show that, compared with the traditional recommendation algorithm, WUI-CF better mines user interest, adapts to changes in user interest, and improves recommendation accuracy, thus alleviating users' difficulty in choosing among the vast amount of agricultural product information and improving user satisfaction.
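The core idea of weighting interactions by behavior type and recency can be sketched as follows. The behavior weights and half-life below are illustrative assumptions, not values from the paper:

```python
import math

# Sketch of a WUI-CF-style interest weight: each user-item interaction is
# weighted by behavior type and decayed by recency, so recent activity
# dominates the profile and interest drift is tracked.

BEHAVIOR_WEIGHT = {"view": 1.0, "cart": 2.0, "purchase": 3.0}  # assumed values

def interest_score(events, half_life_days=30.0):
    """events: list of (behavior, days_ago) pairs for one user-item pair."""
    score = 0.0
    for behavior, days_ago in events:
        decay = math.exp(-math.log(2) * days_ago / half_life_days)
        score += BEHAVIOR_WEIGHT[behavior] * decay
    return score

recent = interest_score([("view", 1), ("purchase", 2)])
stale = interest_score([("view", 90), ("purchase", 95)])
```

The same pair of events scores much higher when recent, which is exactly the property plain rating-based collaborative filtering lacks.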
Core in Covering Approximation Space and Its Properties
ZHOU Jun-li, GUAN Yan-yong, XU Fa-sheng and WANG Hong-kai
Computer Science. 2020, 47 (6A): 526-529.  doi:10.11896/jsjkx.190600003
Abstract PDF(1713KB) ( 596 )   
References | Related Articles | Metrics
A new concept of core in the covering approximation space is proposed, and the existence and uniqueness of the core as well as the relationships between covering blocks, neighborhoods and the core are studied. Based on the core and reduction, the concept of consistent covering is proposed, and the relationships among the reduction, the core and the consistent covering are revealed. Finally, a necessary and sufficient condition for the neighborhood family derived from a covering to be equal to the covering itself is given.
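The neighborhood operator referred to above is standard in covering rough set theory: the neighborhood of x is the intersection of all covering blocks containing x. A small sketch on a toy covering (the paper's notion of core is defined relative to these constructs; this only illustrates the neighborhood computation):

```python
# Neighborhood in a covering approximation space:
# N(x) = intersection of all blocks of the covering that contain x.

def neighborhood(x, covering):
    blocks = [b for b in covering if x in b]
    n = set(blocks[0])
    for b in blocks[1:]:
        n &= b
    return n

U = {1, 2, 3, 4}                       # toy universe
C = [{1, 2}, {2, 3}, {3, 4}, {1, 2, 3}]  # a covering of U
```

For example, element 2 lies in three blocks whose intersection is {2}, while element 4 lies only in {3, 4}, so its neighborhood is the whole block.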
Research on Mobile Game Industry Development in China Based on Text Mining and Decision Tree Analysis
ZHU Di-chen, XIA Huan, YANG Xiu-zhang, YU Xiao-min, ZHANG Ya-cheng and WU Shuai
Computer Science. 2020, 47 (6A): 530-534.  doi:10.11896/jsjkx.190700124
Abstract PDF(2502KB) ( 1286 )   
References | Related Articles | Metrics
In view of the problems of inaccurate topic identification and the lack of data mining and visualization analysis methods in research on the development of China's mobile game industry, this paper proposes a research method based on text mining and decision tree analysis. It analyzes the factors that influence the revenue and popularity of mobile games from many aspects, evaluates the characteristics of the industry from multiple perspectives, and studies the relationship between revenue and the degree of visualization, game type, cultural background and internationalization index. A detailed experiment is conducted with the Python language, and the relationship between developers and the local science and technology innovation index is analyzed, so as to intelligently recommend mobile games with high popularity and playability. The experimental results show that the proposed method has theoretical significance and research value, and can be applied to fields such as mobile game market analysis and mobile game evaluation and recommendation. It can also help optimize China's mobile game industry market and promote its development.
Agricultural Product Output Forecasting Method Based on Grey-Markov Model
MA Chuang, YUAN Ye and YOU Hai-sheng
Computer Science. 2020, 47 (6A): 535-539.  doi:10.11896/jsjkx.190700126
Abstract PDF(1692KB) ( 1713 )   
References | Related Articles | Metrics
Grain plays an important role among agricultural products. Grain output determines, to a certain extent, a country's grain supply capacity and level of food security, so accurate prediction of grain output is of great value. Since grain output is highly volatile and random due to various complex factors, in order to improve prediction accuracy, a model fusing the grey model with the Markov model is proposed for the characteristics of grain output in China, in which the Markov model is used to modify the forecast values of the grey model to achieve periodic forecasting of grain output. China's annual grain output data from 2009 to 2018 (data source: National Bureau of Statistics) are selected for analysis. The method first uses the grey model to predict output, computes the forecast errors, and builds a grey model for the error sequence to correct the forecasts; second, the annual output data are divided into several states according to the accuracy of the annual forecasts, and the state transition probabilities and state transition probability matrices of each order are obtained; finally, annual grain output is predicted with a metabolic (rolling) grey model, and the residuals of the prediction are modified with the Markov model, improving the accuracy of the yield forecast. Simulation experiments compare the prediction accuracy of the single grey model and the grey-Markov model. The forecast error of the grey model is less than 1.00% for the annual output from 2009 to 2013; however, as the years increase, the forecast accuracy deteriorates and the errors all exceed 1.00%. The grey-Markov model's annual output prediction error is less than 0.30%, with an average error of 0.12%. Compared with the traditional grey model and Markov model, the prediction accuracy is greatly improved.
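The GM(1,1) grey model at the heart of the grey-Markov method fits two parameters from the accumulated series and then forecasts with an exponential whitening solution. A minimal sketch on a toy near-exponential series (not the paper's grain-output figures; the Markov residual correction is omitted):

```python
import math

# GM(1,1): accumulate the series, fit a and b by least squares on the
# adjacent-mean sequence, then forecast with the whitening solution.

def gm11(x0):
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]             # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # adjacent means
    # Least squares for x0(k) = -a*z1(k) + b  (2x2 normal equations).
    m = n - 1
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    sy = sum(x0[1:])
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m

    def predict(k):
        """Forecast the k-th value (1-based); k > n gives out-of-sample points."""
        return (x0[0] - b / a) * math.exp(-a * (k - 1)) * (1 - math.exp(a))

    return a, b, predict

x0 = [560, 583, 607, 632, 658]     # toy series growing about 4% per year
a, b, predict = gm11(x0)
```

Because the toy series is nearly geometric, the fitted development coefficient `a` is negative (growth) and in-sample predictions land within a couple of percent of the data; on real grain data the residuals would then be classified into states and corrected by the Markov chain.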
Recommendation Algorithm Based on Convolutional Neural Network and Constrained Probability Matrix Factorization
MA Hai-jiang
Computer Science. 2020, 47 (6A): 540-545.  doi:10.11896/jsjkx.191000172
Abstract PDF(2361KB) ( 993 )   
References | Related Articles | Metrics
Due to the sparsity of user rating data and the lack of context information, recommendation algorithms based on matrix factorization often lack accuracy. To solve this problem, a recommendation algorithm based on a convolutional neural network and constrained probabilistic matrix factorization is proposed. Firstly, a convolutional neural network model is constructed to identify users' contextual auxiliary information, obtain latent text vectors, superimpose Gaussian noise, and initialize the item feature matrix. Then, according to the user rating information, user features are constrained by a constraint matrix, and the user feature matrix is initialized by superimposing a compensation matrix. The initialized user and item feature matrices are then used to fit the rating matrix, the rating matrix is factorized, and a coordinate descent algorithm updates the parameters. Finally, users' scores on items are predicted and item recommendation is carried out. Experimental results on the MovieLens and Amazon data sets show that this algorithm is significantly superior to traditional recommendation models and effectively improves the accuracy of recommendation results.
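Stripped of the CNN text vectors and the constraint/compensation matrices, the core of any such model is fitting user and item latent vectors to the observed ratings. A bare-bones sketch of that factorization step (plain random initialization and gradient updates, an illustrative simplification of the paper's method):

```python
import random

# Minimal matrix factorization: learn user matrix U and item matrix V so
# that U[u].V[i] approximates each observed rating r(u,i).

def factorize(ratings, n_users, n_items, k=4, steps=300, lr=0.05, reg=0.02, seed=1):
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            e = r - pred
            for f in range(k):          # regularized gradient step
                U[u][f] += lr * (e * V[i][f] - reg * U[u][f])
                V[i][f] += lr * (e * U[u][f] - reg * V[i][f])
    return U, V

# Toy rating triples (user, item, rating) on a 3x3 matrix with gaps.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 5), (1, 2, 4), (2, 1, 2), (2, 2, 4)]
U, V = factorize(ratings, n_users=3, n_items=3)
pred_00 = sum(U[0][f] * V[0][f] for f in range(4))
```

The full model replaces the random initializations with CNN-derived item vectors and constrained user vectors, which is what injects context into the otherwise sparse factorization.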
Interdiscipline & Application
Design and Performance Analysis of Automotive Supply Chain System Based on Hyperledger Fabric
LIN Xu-dan, BAO Shi-jian, ZHAO Li-xin and ZHAO Chen-lin
Computer Science. 2020, 47 (6A): 546-551.  doi:10.11896/jsjkx.190700022
Abstract PDF(3121KB) ( 1312 )   
References | Related Articles | Metrics
To date, a centralized management mode has been prevalent in the Automobile Supply Chain System (ASCS). The difficulty of data exchange and the information asymmetry between enterprises, however, lead to low efficiency in such a system, and the opacity of information also raises a critical question of trust. For these reasons, inspired by the emerging blockchain technology, a novel distributed blockchain-based ASCS is designed with Hyperledger Fabric as the development framework, to provide secure and trusted transaction services for multi-party enterprises. The proposed system offers a series of advantages, such as access control, data transparency, traceability and tamper resistance. In addition, a multi-channel architecture is devised to realize privacy isolation in enterprise collaboration. In this paper, the experimental environment is first constructed using Docker technology, then the functional interfaces are tested, and finally the feasibility of the proposed system is verified by analyzing its throughput. This paper introduces the "blockchain+" model, which provides a new idea for the upgrading and transformation of the traditional automobile supply chain.
Single Departure and Arrival Procedure Optimization in Airport Terminal Area Based on Branch and Bound Method
ZHOU Jun and WANG Tian-qi
Computer Science. 2020, 47 (6A): 552-555.  doi:10.11896/jsjkx.190600018
Abstract PDF(3898KB) ( 688 )   
References | Related Articles | Metrics
Currently, the majority of airport departure and arrival procedures are designed manually and drawn with the help of computer-aided software, so there is still room for improvement in bringing airspace resources into full play. In order to provide effective decision support for procedure designers, an optimization method for single departure and arrival procedure design is proposed. Firstly, each route is modeled in three dimensions in compliance with the Required Navigation Performance (RNP), and flight restrictions such as obstacle avoidance are considered. Secondly, three different ways of obstacle avoidance are proposed: bypassing clockwise or counter-clockwise along the obstacle boundary, or maintaining the current flight level below the obstacle. Then, a Branch and Bound (B&B) approach is developed, in which the branching strategies correspond to the different ways of obstacle avoidance. Finally, the algorithm is tested on two different obstacle layouts, and its computation time is compared with the A* algorithm. The results show that the algorithm can provide optimal routes that avoid obstacles and conform to RNP requirements within a short computing time. Moreover, by adjusting the weight coefficients in the objective function, continuous climb or descent procedures can be obtained, which has a positive impact on aircraft noise and emission reduction.
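The general shape of such a branch-and-bound search is: expand partial routes (branch), keep the best complete route found so far (incumbent), and prune any partial route whose lower bound cannot beat it. A generic skeleton on a toy shortest-path problem, where in the paper's setting the branches would be the three obstacle-avoidance choices:

```python
import heapq

# Generic best-first branch and bound: prune any node whose lower bound
# (cost so far + heuristic) cannot beat the incumbent solution.

def branch_and_bound(start, goal, neighbors, heuristic):
    best_cost, best_path = float("inf"), None
    frontier = [(heuristic(start), 0.0, start, [start])]
    while frontier:
        bound, cost, node, path = heapq.heappop(frontier)
        if bound >= best_cost:          # prune: cannot improve the incumbent
            continue
        if node == goal:
            best_cost, best_path = cost, path
            continue
        for nxt, w in neighbors(node):  # branch on the available choices
            if nxt not in path:
                c = cost + w
                heapq.heappush(frontier, (c + heuristic(nxt), c, nxt, path + [nxt]))
    return best_cost, best_path

# Toy route graph (nodes would be waypoints, weights route-segment costs).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
cost, path = branch_and_bound("A", "D", lambda n: graph[n], lambda n: 0)
```

With an admissible heuristic of zero this degenerates to uniform-cost search; a tighter lower bound (e.g. straight-line distance respecting RNP constraints) is what makes pruning effective on real procedure design instances.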
Format Mining Method of Variable-length Domain in Private Binary Protocol
XU Xu-dong, ZHANG Zhi-xiang and ZHANG Xian
Computer Science. 2020, 47 (6A): 556-560.  doi:10.11896/jsjkx.190900035
Abstract PDF(2035KB) ( 721 )   
References | Related Articles | Metrics
Protocol reverse engineering is one of the important steps in fuzz testing. Aiming at the problems that there is no good systematic method for format mining of the variable-length domain in private binary protocols, and that mining the keyword domain boundaries of the variable-length domain is unsatisfactory, a method that handles the length domain and the keyword domain of the variable-length domain separately is proposed. For the length domain, based on the results of progressive multi-sequence alignment, the global and local length domains are mined with an iterative window mining method; tests on a data set constructed from the SNMP protocol show good boundary mining results. For the keyword domain, in view of the problem that existing methods cannot mine its front boundary, the voting experts algorithm is improved by adding a reverse search tree, so that the front and back boundaries of the keyword domain can be mined at the same time. Tests on data sets constructed from the ICMP and HTTP protocols show a great improvement over the traditional voting experts algorithm.
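The intuition behind length-domain mining can be shown with a toy version of the window test: slide over candidate offsets in a set of aligned messages and flag an offset as a plausible length field when its value differs from the message length by a constant. The message layout below is invented for illustration:

```python
# Toy length-field inference: an offset is a length-field candidate when
# len(message) - value_at(offset) is the same constant for every message.

def find_length_field(messages, width=1):
    candidates = []
    max_off = min(len(m) for m in messages) - width
    for off in range(max_off + 1):
        diffs = {len(m) - int.from_bytes(m[off:off + width], "big")
                 for m in messages}
        if len(diffs) == 1:            # constant header overhead => candidate
            candidates.append(off)
    return candidates

# Invented binary messages: 2-byte magic, 1-byte payload length, payload.
msgs = [bytes([0xAA, 0x01, n]) + bytes(n) for n in (3, 7, 12)]
offsets = find_length_field(msgs)
```

Real traffic needs the paper's alignment step first, since length fields only line up across messages after multi-sequence alignment.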
Node Fusion Optimization Method Based on LLVM Compiler
HU Hao, SHEN Li, ZHOU Qing-lei and GONG Ling-qin
Computer Science. 2020, 47 (6A): 561-566.  doi:10.11896/jsjkx.191100017
Abstract PDF(2807KB) ( 2294 )   
References | Related Articles | Metrics
LLVM is a compiler framework written in C++ that supports multiple back ends and can optimize program compile time, link time, run time and idle time. Node fusion is a simple and effective optimization whose basic idea is to combine multiple nodes into one efficient fused node. This optimization can reduce overheads such as instructions, registers, clock cycles and memory accesses, thereby reducing program running time and improving memory access efficiency. In order to improve the performance of the LLVM compiler, node fusion optimization algorithms are proposed for the intermediate representation phase, the DAG combine phase and the instruction selection phase. On the domestic Sunway processor platform, with CLANG and FLANG as the front ends and LLVM as the back end, LLVM is evaluated on the SPEC CPU2006 test set. The results show that node fusion optimization improves compiler performance and reduces program running time, with a maximum speedup of 1.59 and an average speedup of 1.13.
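The flavor of a DAG-combine rewrite can be illustrated outside C++ with a toy expression tree: a multiply feeding an add is fused into a single "fma" node, the classic example of replacing two nodes with one. This is a conceptual sketch only, not LLVM's actual API:

```python
# Toy node fusion on a nested-tuple expression DAG:
# ('add', ('mul', a, b), c)  ->  ('fma', a, b, c)

def fuse(node):
    if not isinstance(node, tuple):
        return node
    # Fuse children bottom-up first, then try to rewrite this node.
    node = (node[0],) + tuple(fuse(ch) for ch in node[1:])
    if node[0] == "add" and isinstance(node[1], tuple) and node[1][0] == "mul":
        _, (_, a, b), c = node
        return ("fma", a, b, c)
    return node

expr = ("add", ("mul", "x", "y"), ("add", ("mul", "u", "v"), "w"))
fused = fuse(expr)
```

Two add/mul pairs collapse into two fma nodes, halving the operation count on this expression, which is the same instruction- and register-saving effect the paper pursues inside LLVM's DAG combine phase.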
Research on Method of Credibility Evaluation of System Simulation
GUO Cong-rui, WANG Jun and FENG Yi-ming
Computer Science. 2020, 47 (6A): 567-571.  doi:10.11896/jsjkx.190700201
Abstract PDF(2820KB) ( 1163 )   
References | Related Articles | Metrics
Based on simulation model verification and validation standards and guidelines from overseas, this paper proposes a general process and method for evaluating the credibility of system simulation models. It describes the corresponding concepts, lists the main steps of model evaluation, and employs feature selective validation with uncertainty. To illustrate the feasibility and validity of the proposed method, the paper presents an example of credibility evaluation for a pressure fluctuation simulation model of a water pipe.
Research on Organizational Interoperability Modeling and Evaluation Based on Graph Theory
GAO Lin, DUAN Guo-lin and YAO Tao
Computer Science. 2020, 47 (6A): 572-576.  doi:10.11896/jsjkx.190900114
Abstract PDF(3588KB) ( 796 )   
References | Related Articles | Metrics
To solve the problems of organizational interoperability modeling and evaluation, related research abroad on graph theory applications and interoperability evaluation is analyzed. This paper briefly introduces the origins of graph theory and the three interoperability aspects of enterprise interoperability. The interoperability model is built on the basis of business processes, and an improved modeling method based on graph theory is proposed, together with an organizational interoperability rule based on graph theory. An evaluation mechanism combining graph-theoretic interoperability, enterprise modeling and rules is constructed, which broadens the ideas available for evaluating the interoperability of organizations.
Formation Containment Control of Multi-UAV System Under Switching Topology
ZHAO Xue-yuan, ZHOU Shao-lei, WANG Shuai-lei and YAN Shi
Computer Science. 2020, 47 (6A): 577-582.  doi:10.11896/jsjkx.190700064
Abstract PDF(2500KB) ( 1235 )   
References | Related Articles | Metrics
To solve the formation containment control problem of a multi-UAV system under switching topology, a distributed controller based on a consensus algorithm is designed. The formation control problem of the leaders is transformed into a consensus problem by variable substitution. Then, by a special decomposition of the Laplacian matrix, the consensus problem is simplified to the asymptotic stability problem of low-order systems. Using the properties of the Laplacian matrix with multiple leaders, the followers' containment control problem is likewise transformed into an asymptotic stability problem. The concept of the average dwell time of the switching topology is given, and, combining linear matrix inequalities and Lyapunov functions, the design steps of the consensus controller are presented. It is also proved that the multi-UAV system with switching topology can achieve formation containment flight under the designed controller. Simulation results of a multi-UAV system in three-dimensional space show that the designed consensus controller solves the formation containment control problem under switching topology.
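The consensus mechanism underlying such controllers is simple to state: each agent moves its state toward its neighbors' states, and under a connected topology all states converge. A one-dimensional toy with a fixed ring topology (the paper's switching-topology and formation-offset machinery is omitted):

```python
# Discrete-time consensus: x_i <- x_i + gain * sum_j a_ij (x_j - x_i).
# With a connected graph and small enough gain, all states converge to
# the average of the initial states.

def consensus_step(x, adjacency, gain=0.2):
    n = len(x)
    return [x[i] + gain * sum(adjacency[i][j] * (x[j] - x[i]) for j in range(n))
            for i in range(n)]

A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]                     # ring of four agents
x = [0.0, 4.0, 8.0, 12.0]
for _ in range(100):
    x = consensus_step(x, A)
```

The symmetric topology preserves the state average (here 6.0), so the agents agree on the centroid; formation control adds a desired offset to each agent's substituted variable, and containment drives followers into the convex hull of the leaders.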
Alternative Online Arbitration System for Dispute
ZHOU Wei and LUO Xu-dong
Computer Science. 2020, 47 (6A): 583-590.  doi:10.11896/jsjkx.190900140
Abstract PDF(2736KB) ( 1090 )   
References | Related Articles | Metrics
In recent years, Internet arbitration has played an important role in resolving legal disputes in the digital economy, aiming at settling online disputes online. However, existing arbitration systems do not conform to the high standards of legal procedure required to protect the legal rights of the parties. To address this issue, this paper proposes an alternative online arbitration system for disputes by modeling online and offline arbitration procedures and real arbitration functions, built on a Software-as-a-Service (SaaS) technical architecture and integrating artificial intelligence and blockchain technologies. The system is then tested at the China Maritime Arbitration Commission (CAMC). The results show that arbitration credibility improves significantly and that reengineering of the arbitration process based on the arbitration value chain is realized.
Application Research of Blockchain Technology in Trust Industry
KE Yu-jing, JING Mao-hua and ZHENG Han-yin
Computer Science. 2020, 47 (6A): 591-595.  doi:10.11896/jsjkx.190900055
Abstract PDF(1852KB) ( 857 )   
References | Related Articles | Metrics
Due to its highly centralized mode, the existing trust platform has many problems and security risks, such as opaque transactions and vulnerability to attacks, and cannot match the rapid development of the trust industry. Blockchain is decentralized, open, independent, secure and anonymous, and can well solve the problems faced by the trust industry. Based on blockchain technology, a dual-chain architecture model is proposed, and a dual-chain underlying platform for trust business is designed and implemented on top of it. On the one hand, the platform adopts an interactive dual-chain design combining a relational database with on-chain information, to realize strict control of information permissions and enhance risk management; on the other hand, it uses a dual-chain interaction design containing a consortium chain and a private chain to establish the trust business model. On this basis, the trust-establishment and trust-application chain function modules, as well as the application-chain-based application interface APIs, are designed and implemented. Finally, the advantages and challenges of applying blockchain technology to trust business are analyzed and summarized.
Low Power Long Distance Marine Environment Monitoring System Based on 6LoWPAN
WANG Dong, WANG Hu and JIANG Qian-li
Computer Science. 2020, 47 (6A): 596-598.  doi:10.11896/jsjkx.190900194
Abstract PDF(4421KB) ( 834 )   
References | Related Articles | Metrics
Marine environmental monitoring is characterized by decentralized monitoring nodes, a large number of nodes, complex measurement data types, and varied information exchange and communication. Wireless sensor networks can reduce the number of cable connections and decrease deployment and maintenance costs. Based on IEEE 802.15.4, 6LoWPAN technology realizes the transmission of IPv6 packets in wireless sensor networks, and is therefore an ideal technology for interconnecting wireless sensor networks with the Internet. Based on a study of the topology and protocols of the Contiki 6LoWPAN network, the TI CC1310 platform is used to build wireless sensor nodes and edge routers. Node data are sent through the edge routers and the Internet to the monitoring system on servers, achieving dynamic monitoring of ocean data. Experiments show that the system has the advantages of easy network construction, long transmission distance, and low cost and power consumption.
Research on Relocation of Substation Inspection Robot
LI Zhong-fa, YANG Guang, MA Lei and SUN Yong-kui
Computer Science. 2020, 47 (6A): 599-602.  doi:10.11896/jsjkx.190500018
Abstract PDF(2351KB) ( 1085 )   
References | Related Articles | Metrics
The environment of a substation is complex, and manual inspection is labor-intensive and inefficient. This paper studies the hardware framework of the inspection robot and completes its positioning based on Adaptive Monte Carlo Localization (AMCL), providing a corresponding solution for the deficiency of AMCL in practical engineering applications. Considering that AMCL cannot restore the location rapidly, a new database-based relocation method is put forward, which uses a database to store location values. When localization fails, the location value stored in the database is used to initialize the particles, so as to realize rapid restoration of the location. Experiments prove that, compared with the original AMCL algorithm, the improved algorithm is more competent at restoring the location after localization is lost.
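The structure of the database-assisted relocation can be sketched as: periodically store the last trusted pose while localization is healthy, and on failure reinitialize the particle cloud around it instead of globally. The class and parameter names below are illustrative, not the paper's implementation:

```python
import random

# Sketch of database-assisted AMCL relocation: save the last good pose,
# and on localization failure draw a fresh particle cloud around it.

class PoseStore:
    def __init__(self):
        self.last_pose = None

    def save(self, x, y, theta):
        """Called periodically while AMCL localization is healthy."""
        self.last_pose = (x, y, theta)

    def reinit_particles(self, n=100, spread=0.5, seed=0):
        """Particle cloud centered on the stored pose (meters / radians)."""
        rng = random.Random(seed)
        x, y, th = self.last_pose
        return [(x + rng.gauss(0, spread),
                 y + rng.gauss(0, spread),
                 th + rng.gauss(0, 0.1)) for _ in range(n)]

store = PoseStore()
store.save(3.0, 4.0, 0.5)              # last trusted AMCL estimate
particles = store.reinit_particles()
```

Seeding the cloud near the last known pose is what makes recovery fast: the filter converges from a small region instead of re-localizing over the whole substation map.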
Design and Analysis of Token Model Based on Blockchain Technology
WU Guang-fu, CHEN Ying, ZENG Xian-wen, HE Dao-jing and LI Jiang-hua
Computer Science. 2020, 47 (6A): 603-608.  doi:10.11896/jsjkx.190800155
Abstract PDF(2073KB) ( 1837 )   
References | Related Articles | Metrics
An in-depth study of the traditional token model finds that its centralized mode has always restricted system development, and the emergence of blockchain technology undoubtedly provides an entry point for the application and promotion of tokens, which will break the information barrier between enterprises. Blockchain is an Internet database technology in which every user has an equal right to compete to write database records. In this paper, a token chain based on blockchain technology is designed. The token chain is decentralized and tamper-proof; it can break the information barrier between enterprises, and the use of tokens on the chain increases trust between enterprises. To enhance information flow between enterprises, a safer and more efficient consensus algorithm, the token consensus algorithm, is proposed for the token chain, giving it better efficiency and performance than traditional public chains such as Bitcoin and Ethereum. Pluggable cryptography and database components make the blockchain more efficient and convenient to develop for different application scenarios.
Intelligent Video Surveillance Systems Based on FPGA
ZHAO Bo, YANG Ming, TANG Zhi-wei and CAI Yu-xin
Computer Science. 2020, 47 (6A): 609-611.  doi:10.11896/jsjkx.190700118
Abstract PDF(4306KB) ( 705 )   
References | Related Articles | Metrics
According to new requirements of video monitoring, this paper designs an FPGA-based intelligent video surveillance and retrieval system. The system can perform video preprocessing, face detection, intelligent background removal and video structural description simultaneously, and realizes accelerated intelligent video retrieval by relying on hardware acceleration, so as to achieve rapid analysis and processing of surveillance video and quickly obtain the desired results. By integrating an ARM9 CPU, DDR, a video acquisition module and various peripherals (UART, GPIO, etc.) on the FPGA, the system realizes detection and analysis of real-time video. For video with a resolution of 1280x720, a 280 MHz synchronous clock is adopted and the system frame rate is about 6.6 fps; when these intellectual property cores are integrated into an SoC chip, the processing speed can reach 30 fps.
Stock Investment Strategy Development Based on BigQuant Platform
LI Yong
Computer Science. 2020, 47 (6A): 612-615.  doi:10.11896/jsjkx.190600007
Abstract PDF(3147KB) ( 1301 )   
References | Related Articles | Metrics
Based on BigQuant, a stock investment consulting platform, and using its StockRanker algorithm and back-testing mechanism, this paper analyzes the characteristic data of 1848 stocks in China's stock market, obtained by deducting the CSI 300 index constituents from all A shares with normal trading during the sample period from January 1, 2010 to February 5, 2019, and ranks the stocks with the greatest investment value, so as to provide intelligent, personalized asset allocation proposals for investors with different risk preferences. Based on the CSI Smallcap 500 index, through strategic judgment, this study develops product D, which substitutes well-performing non-constituent stocks for poorly performing constituent stocks and delivers better and more stable investment returns than the standard index fund.
Emergency Plan Evaluation of Special Equipment Accident Based on Intuitionistic Fuzzy Analytic Hierarchy Process
ZHENG Geng-feng
Computer Science. 2020, 47 (6A): 616-621.  doi:10.11896/jsjkx.190600097
Abstract PDF(1990KB) ( 675 )   
References | Related Articles | Metrics
In view of the fact that most existing emergency plans for special equipment accidents have not been tested in practice and lack a scientific and reasonable evaluation system, this paper proposes an evaluation method for special equipment emergency plans based on the intuitionistic fuzzy analytic hierarchy process (AHP). Firstly, a performance evaluation index system for the emergency plan is constructed from three aspects: preparation before implementation, execution during implementation, and improvement after implementation. Secondly, considering the uncertainty of the emergency process and the subjectivity of the evaluation process, the weight of each index is calculated by intuitionistic fuzzy AHP, and the performance of each index is determined by combining the expert group's scores on the emergency simulation. Finally, the feasibility and effectiveness of the method are demonstrated by comparison with the results of manual evaluation.
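The AHP weighting step that the intuitionistic fuzzy variant builds on can be sketched in its crisp form: derive index weights from a pairwise judgment matrix by the geometric-mean method. The intuitionistic fuzzy version additionally attaches membership and non-membership degrees to each comparison; the 3x3 matrix below is an invented illustration for the three evaluation aspects:

```python
# Crisp AHP weighting by the geometric-mean (row product) method.

def ahp_weights(judgment):
    n = len(judgment)
    geo = []
    for row in judgment:
        p = 1.0
        for v in row:
            p *= v
        geo.append(p ** (1.0 / n))     # geometric mean of the row
    s = sum(geo)
    return [g / s for g in geo]        # normalize to sum to 1

# Illustrative pairwise comparisons: preparation vs execution vs improvement.
J = [[1.0, 2.0, 3.0],
     [0.5, 1.0, 2.0],
     [1 / 3, 0.5, 1.0]]
w = ahp_weights(J)
```

With this toy matrix, preparation receives the largest weight and post-implementation improvement the smallest, reflecting the stated pairwise preferences.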
Accelerated Software System Based on Embedded Multicore DSP
CAI Yu-xin, TANG Zhi-wei, ZHAO Bo, YANG Ming and WU Yu-fei
Computer Science. 2020, 47 (6A): 622-625.  doi:10.11896/jsjkx.190400079
Abstract PDF(2613KB) ( 729 )   
References | Related Articles | Metrics
In recent years, with the rapid development of intelligent video surveillance, processing the video data generated by various capture devices has become an important task in the public security industry. At present, most video data processing adopts a back-end server mode, which demands high video transmission bandwidth and suffers from problems such as insufficient back-end server resources. For this reason, this paper proposes using an embedded multi-core acceleration board instead of the server to complete part of the task: the image processing algorithms that consume server resources are stripped from the back-end server and moved to the front-end embedded acceleration board, which to a certain extent saves server resources and improves server efficiency. Finally, the scheme is tested. The multi-core acceleration board performs target detection on images with a resolution of at least 2 million pixels. The test results show that the average processing time per image is no more than 200 ms, and that more than 1.3 million pictures can be processed within 24 hours. It is therefore feasible to use a multi-core embedded board instead of a server for this image processing solution.
Computing Ability of Spiking Neural P System Based on Rough Rules
LUO Yun-fang, TANG Cheng-e and WEI Jun
Computer Science. 2020, 47 (6A): 626-630.  doi:10.11896/jsjkx.190500120
Abstract PDF(3293KB) ( 656 )   
References | Related Articles | Metrics
The spiking neural P system is a computing model inspired by the way neurons in biological systems cooperate to process pulses. In order to further reflect the randomness of biological systems, this paper proposes a new neuronal activation mechanism, the spiking neural P system based on rough rules, which uses the concepts of upper and lower approximations to establish the activation conditions of neurons. The computational completeness of the improved spiking neural P system is then proved. Finally, the ability of the system to generate languages is studied to illustrate its computing power. The results show that the improved spiking neural P system has strong computing ability.
Fusion Localization Algorithm of Visual Aided BDS Mobile Robot Based on 5G
MA Hong
Computer Science. 2020, 47 (6A): 631-633.  doi:10.11896/jsjkx.190400156
Abstract PDF(3094KB) ( 1222 )   
References | Related Articles | Metrics
This paper presents an innovative method for estimating the position of a mobile robot using 5G "broadband cloud information" visual image processing aided by BDS, so as to eliminate errors and improve accuracy. By improving the pyramid LK algorithm to estimate optical flow velocity, the speed of the mobile robot can be obtained accurately; acceleration values are provided by the mobile phone's acceleration sensor, and rough three-dimensional position information is provided by the BeiDou receiver. An improved Kalman filter is used for data fusion: the improved path is first supervised by a wavelet neural network; then an improved gradient descent method is used to train the weights and parameters of the wavelet neural network; finally, a combination of PSO and GA further corrects the weights and thresholds of the wavelet neural network, so as to further improve the performance of the Kalman filter and highlight the advantage of using BDS to correct the cumulative error of visual positioning. The method improves the accuracy and reliability of integrated navigation and positioning in especially harsh environments, and has important reference value for further research on BDS and 5G technology in the field of mobile robots.
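The basic fusion step can be shown in one dimension: predict with the optical-flow velocity, then correct with the BDS position fix, which is precisely how satellite fixes bound the drift of visual odometry. A toy constant-velocity sketch with invented noise parameters (the paper's wavelet-neural-network tuning of the filter is omitted):

```python
import random

# 1-D Kalman step: predict with the visual velocity estimate, correct with
# the (noisy) BDS position measurement.

def kalman_step(x, P, z, v, dt=1.0, q=0.01, r=1.0):
    """x, P: state estimate and variance; z: BDS fix; v: optical-flow speed."""
    x_pred = x + v * dt                # predict from optical-flow velocity
    P_pred = P + q
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correct toward the BDS measurement
    return x_new, (1 - K) * P_pred

rng = random.Random(42)
x, P, true = 0.0, 1.0, 0.0
errs = []
for _ in range(200):
    true += 1.0                        # robot moves 1 m per step
    z = true + rng.gauss(0, 1.0)       # BDS fix with 1 m noise
    x, P = kalman_step(x, P, z, v=1.0)
    errs.append(abs(x - true))
avg_err = sum(errs[50:]) / len(errs[50:])
```

After the filter settles, the position error is well below the 1 m measurement noise, illustrating why fusing the two sources beats either alone: the velocity prediction smooths the noisy fixes, and the fixes cancel the velocity drift.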
Design of Network Multi-server SIP Information Encryption System Based on Block Chain and Artificial Intelligence
REN Yi
Computer Science. 2020, 47 (6A): 634-638.  doi:10.11896/jsjkx.190600075
Abstract PDF(2123KB) ( 638 )   
References | Related Articles | Metrics
The traditional encryption system can only realize single information sharing and cannot guarantee the security of multiple information sharing. To solve this problem, a network multi-server SIP information encryption system based on blockchain and artificial intelligence is designed. Under the network multi-server SIP condition, a USB module is designed with a state-register static switch to judge the storage state of USB information; an information interface is built in the USB module, and an A/D converter performs signal conversion to provide the basic transmission signals for task dispatching. Dispatching rules are added and the dispatched information is divided into several data blocks. Following the lock-box idea, a function call set checks access rights and generates an authentication key. The encryption execution module then combines the decomposed information into a chained data structure in a fixed time order, obtains an initial fingerprint by computing the initial data matrix, and executes the encryption scheme according to the fingerprint, realizing information encryption through blockchain and artificial intelligence technology. Experimental comparisons show that the system achieves high transmission integrity and a good encryption effect, and its read/write efficiency stays at 90% or above.
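The "chained data structure with fingerprints" is, at heart, a hash chain: each block's digest covers its payload plus the previous digest, so tampering with any block breaks every later link. A minimal sketch (block layout and payloads invented for illustration):

```python
import hashlib
import json

# Hash chain over dispatched data blocks: each block's fingerprint covers
# its payload and the previous fingerprint.

def build_chain(blocks):
    chain, prev = [], "0" * 64
    for payload in blocks:
        record = {"prev": prev, "payload": payload}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append({"prev": prev, "payload": payload, "hash": digest})
        prev = digest
    return chain

def verify(chain):
    prev = "0" * 64
    for blk in chain:
        record = {"prev": prev, "payload": blk["payload"]}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev or blk["hash"] != digest:
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["sip-msg-1", "sip-msg-2", "sip-msg-3"])
```

Changing any payload invalidates that block's digest and, through the `prev` links, every subsequent block, which is the integrity property the dispatched SIP data blocks rely on.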
Building Innovative Enterprise Customer Service Technology Platform Based on Blockchain
ZHANG Qi-ming, LU Jian-hua, LI Shou-zhi and XU Jian-dong
Computer Science. 2020, 47 (6A): 639-642.  doi:10.11896/jsjkx.191200118
Abstract PDF(2732KB) ( 1184 )   
References | Related Articles | Metrics
The traditional customer service management system makes it difficult to establish a convenient and reliable data sharing channel among the participants, so deep integration and sharing of information cannot be achieved.This article first explains the technical rationale for building a customer service platform on the blockchain, and then proposes a technical architecture for building an innovative customer service platform based on a permissioned chain.This architecture integrates a consortium chain and a private chain to establish trusted data sharing links between the enterprise and its customers, and among the various departments of the enterprise.The platform uses a KV-R conversion engine to interconnect the blockchain database with a relational database.This technical framework will be applied to the construction of a new generation of enterprise customer service technology platforms.
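The idea behind a KV-R conversion engine can be sketched as projecting flat key-value records from a ledger-style store into a relational table, so existing SQL tooling can query them. The table name, column names, and JSON value layout below are assumptions for illustration, not the platform's actual schema:

```python
import json
import sqlite3

# Hypothetical KV-to-relational projection: each ledger entry is a JSON
# document keyed by a ticket id; rows land in a queryable SQL table.
def kv_to_relational(kv_records, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS service_tickets "
        "(ticket_key TEXT PRIMARY KEY, customer TEXT, status TEXT)"
    )
    for ticket_key, value in kv_records.items():
        doc = json.loads(value)
        conn.execute(
            "INSERT OR REPLACE INTO service_tickets VALUES (?, ?, ?)",
            (ticket_key, doc["customer"], doc["status"]),
        )
    conn.commit()
```

In this direction the chain remains the system of record while the relational copy serves reporting queries; a real engine would also push relational updates back as signed chain transactions.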
Design of ETC System Based on Microservice Architecture
YU Man, HUANG Kai and ZHANG Xiang
Computer Science. 2020, 47 (6A): 643-647.  doi:10.11896/jsjkx.190800010
Abstract PDF(2030KB) ( 1453 )   
References | Related Articles | Metrics
With the development of information technology, ETC has been widely used in charging scenarios such as highways and urban congestion zones.The ETC system has become increasingly large and complex with the rapid expansion of business functions and the growing volume of users and transactions.In view of the resulting design and maintenance problems, this paper proposes upgrading the existing ETC system in Beijing based on a microservice architecture.The architecture design and key technologies of two important components after system reconstruction, the data platform and the business platform, are introduced in detail.The resulting system conforms to the development principles of lightweight design, loose coupling and high scalability.Fully automatic independent deployment and hot-update operation and maintenance are realized, and the bottleneck problems encountered in practical applications are also solved.
Embedded Device’s IAP Solution Based on Mail-update
CHEN Yun
Computer Science. 2020, 47 (6A): 648-651.  doi:10.11896/jsjkx.191000052
Abstract PDF(108606KB) ( 738 )   
References | Related Articles | Metrics
The electronic control actuator is generally located at the end of an automatic control system and is widely used in agricultural and industrial production.This kind of equipment includes no HMI or remote communication module, yet it still needs scene adaptation, parameter tuning, firmware IAP (In-Application Programming) and so on.This paper studies a Mail-Update scheme for embedded devices, which updates firmware from the HMI of the automatic control system.Taking the electrical wheel upgrade of an agricultural automatic driving system as an example, the paper introduces the Mail-Update solution for embedded devices, covering four units: bootloader development, firmware file generation, the communication adapter module and HMI tool development.The test results show that the Mail-Update solution can meet the upgrade requirements of this kind of end embedded device.
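The firmware-file-generation unit can be sketched as wrapping the raw image with a small header and a CRC32 trailer, so the bootloader can validate a mail-delivered image before flashing it. The header layout, magic value, and use of CRC32 are assumptions for illustration, not the paper's actual file format:

```python
import struct
import zlib

MAGIC = 0x4D414955  # assumed marker identifying a Mail-Update image

def pack_firmware(image: bytes, version: int) -> bytes:
    """HMI-side packaging: 12-byte header + image + 4-byte CRC32 trailer."""
    header = struct.pack("<III", MAGIC, version, len(image))
    crc = zlib.crc32(header + image) & 0xFFFFFFFF
    return header + image + struct.pack("<I", crc)

def validate_firmware(blob: bytes) -> bool:
    """Bootloader-side check before accepting and flashing the image."""
    if len(blob) < 16:
        return False
    magic, version, length = struct.unpack_from("<III", blob, 0)
    body = blob[:-4]
    (crc,) = struct.unpack_from("<I", blob, len(blob) - 4)
    return (magic == MAGIC
            and length == len(blob) - 16
            and zlib.crc32(body) & 0xFFFFFFFF == crc)
```

A real bootloader would perform the same checks over the flash-programmed copy and fall back to the previous image if validation fails, which is what makes in-application programming safe over an unreliable transfer path.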