Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 47 Issue 5, 15 May 2020
Computer Science. 2020, 47 (5): 0-0. 
Theoretical Computer Science
Deeper Explanation of Quantum Logic in Intuitionistic Perspective
ZHOU Heng, WANG Yong-jun, WANG Bao-shan, YAN Jian
Computer Science. 2020, 47 (5): 1-6.  doi:10.11896/jsjkx.191200056
Quantum computing is becoming one of the ongoing research directions of computer science. Quantum logic is the mathematical foundation of quantum computation and quantum information. Von Neumann represented the properties of quantum physical systems by closed subspaces of a Hilbert space, which together constitute an orthomodular lattice. The elements of the orthomodular lattice have a definite physical interpretation but cannot describe superposition. Therefore, Bob Coecke constructed a propositional lattice with Heyting algebra by adding disjunction elements for superposition. The elements of the propositional lattice have a definite mathematical meaning but lack a physical interpretation. To remedy the latter, this paper gives a deeper explanation of the physical meaning of the elements of the propositional lattice. In our view, the added disjunction elements represent an "observer perspective", which is required when depicting superposition in the propositional lattice. Thus, by applying quantum logic to the measurement operation, all elements of the propositional lattice are given a definite physical interpretation, providing a theoretical basis for quantum teleportation and action at a distance.
Survey on Online Influence Maximization
KONG Fang, LI Qi-zhi, LI Shuai
Computer Science. 2020, 47 (5): 7-13.  doi:10.11896/jsjkx.200200071
Influence maximization is the problem of selecting seed nodes under a given influence propagation model so as to maximize the spread of information. It has a wide range of applications, including recommendation systems, viral marketing, information diffusion and link prediction. In practice, the node-to-node propagation probabilities of an information propagation model are usually unknown; online learning algorithms can learn these unknown parameters automatically during the interaction process and gradually approach the optimal solution. This paper first discusses the definition of the influence maximization problem, introduces commonly used influence propagation models, and summarizes common offline influence maximization algorithms. It then introduces the classic online learning framework, the multi-armed bandit setting, analyzes the state of research on the online influence maximization problem, and experimentally compares the performance of common online influence maximization algorithms on real social networks. Finally, the challenges and future research directions of this topic are discussed.
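As background to the offline algorithms such a survey covers, the classic greedy approach under the independent cascade (IC) model can be sketched as follows. This is a generic illustration, not code from the paper; the uniform propagation probability `p` and the toy star graph are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One Monte Carlo run of the Independent Cascade model.
    graph: dict node -> list of neighbours. Returns #activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200, seed=0):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds + [v], p, rng)
                         for _ in range(runs)) / runs
            if spread > best_gain:
                best, best_gain = v, spread
        seeds.append(best)
    return seeds

# Tiny star-shaped network: the hub node 0 reaches everyone directly,
# so greedy should pick it as the first seed.
g = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(greedy_im(g, 1, p=0.5))
```

Online variants in the bandit setting replace the known `p` with estimates updated from observed cascades.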
On Hardness of Approximation for Optimized Problem of Synchronizing Partially Specified Deterministic Finite Automata
ZHU Kai, WU Guo-qing, YUAN Meng-ting
Computer Science. 2020, 47 (5): 14-21.  doi:10.11896/jsjkx.200200073
An automaton is synchronizing if there exists a word with the following property: no matter which state the automaton starts in, it always reaches one particular state after reading that word as input. For the synchronization problem, the core task is to compute the shortest synchronizing word. Focusing on this core problem, this paper studies the complexity of approximately computing (i.e., the hardness of approximating) the shortest synchronizing word for a class of automata called partially specified deterministic finite automata, which is helpful for the analysis and design of approximation algorithms. The main results are proved by two reductions to the problem of computing the length of the shortest synchronizing word (Shortest-Syn), from the optimization problems MAX SAT and MAX FA-INT respectively, together with existing results related to the Probabilistically Checkable Proofs theorem and the Probabilistically Checkable Debate theorem. The results are as follows: for partially specified deterministic finite automata, approximating Shortest-Syn within certain approximation ratios is NP-hard and PSPACE-hard, unless NP and PSPACE respectively collapse to P.
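For intuition about the object being approximated, the shortest synchronizing word of a small, fully specified DFA can be computed exactly by breadth-first search over subsets of states. This exponential-time sketch is illustrative only and is unrelated to the paper's reductions; the Černý automaton used as the example is a standard benchmark whose shortest synchronizing word has length (n-1)².

```python
from collections import deque

def shortest_sync_word(states, alphabet, delta):
    """Exact shortest synchronizing word via BFS over subsets of states
    (exponential in |states|, fine for tiny automata).
    delta: dict mapping (state, letter) -> state."""
    start = frozenset(states)
    if len(start) == 1:
        return ""
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        subset, word = queue.popleft()
        for a in alphabet:
            image = frozenset(delta[(q, a)] for q in subset)
            if len(image) == 1:
                return word + a  # first singleton in BFS order is shortest
            if image not in seen:
                seen.add(image)
                queue.append((image, word + a))
    return None  # the automaton is not synchronizing

# Cerny automaton C_4: 'a' merges state 0 into 1, 'b' rotates the cycle;
# its shortest synchronizing word has length (4-1)^2 = 9.
states = [0, 1, 2, 3]
delta = {(q, 'a'): (1 if q == 0 else q) for q in states}
delta.update({(q, 'b'): (q + 1) % 4 for q in states})
w = shortest_sync_word(states, 'ab', delta)
print(len(w))
```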
Approximability of Partition Functions of Ferromagnetic Two-state Spin Systems
QIU Guo-liang, ZHANG Chi-hao
Computer Science. 2020, 47 (5): 22-26.  doi:10.11896/jsjkx.200200119
The two-state spin system is a simplified model of multi-particle interaction in statistical physics. Computing the partition function of the system is of great significance in both statistical physics and computer science. It is well known that exact computation of the partition function is #P-hard for general systems; therefore, the approximability of the partition function has attracted much attention from computer scientists. Notable progress has been made along this line of research in recent years, revealing close connections between the approximability of the partition function and phase transitions of the physical models. Based on these connections, approximation algorithms have been discovered for the model in a wide range of parameters. This paper reviews the research on the approximability of partition functions of ferromagnetic two-state spin systems, introducing the main ideas of three classes of algorithms for this problem: MCMC-based algorithms, algorithms based on decay of correlation, and recent polynomial-interpolation-based algorithms. Moreover, the results of these algorithms are compared with the current inapproximability results.
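The partition function under discussion can be written out by brute force for a tiny ferromagnetic Ising (two-state spin) system; this enumeration is a definitional illustration of why approximation is needed, not one of the approximation algorithms surveyed.

```python
from itertools import product
import math

def partition_function(n, edges, beta):
    """Brute-force partition function of the ferromagnetic Ising model:
    Z = sum over spin assignments sigma in {-1,+1}^n of
        exp(beta * sum over edges (u,v) of sigma_u * sigma_v).
    Exponential in n, so only feasible for very small systems."""
    z = 0.0
    for sigma in product((-1, 1), repeat=n):
        energy = sum(sigma[u] * sigma[v] for u, v in edges)
        z += math.exp(beta * energy)
    return z

# Single edge: Z = 2*exp(beta) + 2*exp(-beta) = 4*cosh(beta).
print(partition_function(2, [(0, 1)], 1.0))
```

The 2^n blow-up in this enumeration is exactly what MCMC, correlation decay, and polynomial interpolation methods avoid.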
On Logic of Graded Argumentation System
TAN Li-xing, WANG Fu-jun
Computer Science. 2020, 47 (5): 27-31.  doi:10.11896/jsjkx.200200052
In recent years, formal argumentation has gradually become one of the research hotspots in the field of artificial intelligence. Since Dung put forward the abstract argumentation framework in 1995, it has been generally accepted that the core task of argumentation is to evaluate a set of arguments under various extension-based semantics, so as to determine their justification status. The graded argumentation system (GAS) is a generalization of the classical Dung-style argumentation system (DAS), which provides a more fine-grained notion of argument status by generalizing the two core principles of DAS semantics, conflict-freeness and acceptability. Current research on the semantic equivalence of argumentation systems mainly focuses on the framework and argument level, which can provide powerful support for structural reduction. To study the semantic equivalence of arguments in two different graded argumentation systems, fragments of the graded argumentation system are first formalized using Graded Modal Logic (GML). Then, a one-to-one correspondence between the extension-based semantics of the graded argumentation system and GML formulas is established and proved. Finally, the graded bisimulation relation is defined, and it is proved to imply four prominent semantic equivalences of GAS.
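The two DAS principles that GAS generalizes, conflict-freeness and acceptability, can be sketched for a classical Dung framework as follows. These are the standard definitions; the tiny three-argument framework is an illustrative assumption.

```python
def conflict_free(args, attacks):
    """A set S is conflict-free if no argument in S attacks another in S."""
    return not any((a, b) in attacks for a in args for b in args)

def acceptable(arg, args, attacks, universe):
    """arg is acceptable w.r.t. S if S attacks every attacker of arg."""
    attackers = {b for b in universe if (b, arg) in attacks}
    return all(any((c, b) in attacks for c in args) for b in attackers)

def admissible(args, attacks, universe):
    """Conflict-free and every member is acceptable w.r.t. the set."""
    return conflict_free(args, attacks) and all(
        acceptable(a, args, attacks, universe) for a in args)

# a attacks b, b attacks c: {a, c} defends itself, {b} does not.
U = {'a', 'b', 'c'}
atk = {('a', 'b'), ('b', 'c')}
print(admissible({'a', 'c'}, atk, U))
```

A graded system replaces these all/any conditions with numeric thresholds on how many attackers may be tolerated or must be counter-attacked.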
New Algebraic Logic Reduction Method for Boolean Formula
LIU Jiang, ZHOU Hong-hao
Computer Science. 2020, 47 (5): 32-37.  doi:10.11896/jsjkx.190400018
The Boolean satisfiability problem is one of the earliest problems proven NP-complete, and 1-in-3-SAT is an NP-complete subclass of it. The computational complexity of 1-in-3-SAT depends on the number of variables and clauses in the formula, so reducing a 1-in-3 formula to one with fewer variables or clauses is key to improving the efficiency of solving 1-in-3-SAT. Based on a new type of normal form, XCNF, this paper proposes a new algebraic logic reduction method that reduces the number of variables and clauses in polynomial time. The main idea is as follows: first, transform the 1-in-3 formula into an XCNF formula; then, find the X pure literals in the XCNF formula and assign the corresponding Boolean variables in the 1-in-3 formula according to the X pure literal rule; finally, obtain a reduced formula with the same 1-in-3 satisfiability as the original one.
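For reference, the 1-in-3 semantics (exactly one true literal per clause) can be checked by brute force on tiny instances. This is a definitional sketch, not the paper's XCNF reduction; the literal encoding (signed integers) is an illustrative convention.

```python
from itertools import product

def one_in_three_sat(n_vars, clauses):
    """Brute-force 1-in-3 satisfiability. Each clause is a triple of
    literals: positive int v means variable v, negative means its negation
    (variables numbered 1..n_vars). An assignment satisfies the formula
    iff EXACTLY ONE literal per clause is true. Returns a satisfying
    assignment as {var: bool}, or None."""
    for bits in product((False, True), repeat=n_vars):
        def lit(l):
            val = bits[abs(l) - 1]
            return val if l > 0 else not val
        if all(sum(lit(l) for l in clause) == 1 for clause in clauses):
            return dict(enumerate(bits, start=1))
    return None

# One clause (x1, x2, x3): any assignment with exactly one true variable works.
print(one_in_three_sat(3, [(1, 2, 3)]))
```

A reduction that preserves 1-in-3 satisfiability must leave the answer of such a checker unchanged while shrinking `n_vars` or `clauses`.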
Uniform Solution to QSAT Problem by Communication P Systems with Membrane Division and Promoters
SONG Bo-sheng, CHENG Yu
Computer Science. 2020, 47 (5): 38-42.  doi:10.11896/jsjkx.191100204
Membrane computing is a branch of natural computing, and the computing models investigated in membrane computing are called membrane systems. Communication between cells is a significant characteristic of membrane systems. Communication P systems with membrane division are distributed parallel computing models that can solve hard computational problems in polynomial time. In this work, promoters are introduced into communication P systems with membrane division, yielding a variant called communication P systems with membrane division and promoters, in which any number of rules can be guided by a promoter in one step, and promoters do not participate in the evolution process when the rules they enable are applied. The computational efficiency of this kind of P system is studied: the paper presents a uniform polynomial-time solution to the PSPACE-complete problem QSAT using symport rules of length at most 2 and promoters of length at most 1.
Stability Analysis of Ontology Learning Algorithm in Decision Graph Setting
ZHU Lin-li, HUA Gang, GAO Wei
Computer Science. 2020, 47 (5): 43-50.  doi:10.11896/jsjkx.200100129
Traditional ontology algorithms use heuristic tricks to calculate semantic similarity. With the increasing amount of data processed by ontologies, more and more machine learning techniques are applied to obtain ontology functions. Stability is a necessary condition for ontology learning algorithms: slightly changing the ontology sample set should have no substantial influence on the optimal ontology function obtained. This paper studies the stability and corresponding statistical characteristics of ontology learning algorithms in the setting where the dependency relation among ontology samples is characterized by a graph. Firstly, the traditional PO and LTO uniform stability conditions are analyzed. Then, the extended uniform stability conditions Pk and LkO for large samples are proposed, and related theoretical results are obtained. Finally, two sample transformations (replacing ontology samples and deleting ontology samples) are combined to put forward the concept of combined uniform stability in the large-ontology-sample setting, and general results are derived using statistical learning theory. In addition, under various stability conditions, generalization bounds are discussed for ontology learning algorithms that satisfy the m-independence condition.
Tree Decomposition Algorithm of Graph and Its Application
LEI Ying, XU Dao-yun
Computer Science. 2020, 47 (5): 51-58.  doi:10.11896/jsjkx.191100118
A tree decomposition of a graph G=(V,E) assigns to each node of a tree T a subset (bag) of the vertex set V, such that every edge of G is contained in some bag and, for any two bags on a path of T, their intersection is contained in every bag on that path (equivalently, the bags containing a given vertex form a connected subtree of T). The width of a decomposition tree T is the number of elements of its largest bag minus 1, and the treewidth of the graph G is the smallest width over all its tree decompositions. A CNF formula F can be represented by a bipartite graph G=(V∪C,E) (the factor graph of the formula), where the variable node set V corresponds to the variables and the clause node set C corresponds to the clauses of F; a positive (negative) occurrence of a variable in a clause is represented by a solid (dashed) edge. To study the bipartite graph, the signs on the edges of the factor graph are ignored. This paper studies tree decomposition algorithms for graphs and applies them to the factor graphs of CNF formulas, aiming to explore experimentally the relationship between the treewidth of the formula's factor graph and the difficulty of solving the formula.
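The decomposition conditions and the width definition above can be checked mechanically for a small example. This sketch verifies a given decomposition rather than computing one; the 4-cycle and its two-bag decomposition are illustrative assumptions.

```python
def decomposition_width(bags):
    """Width of a tree decomposition = size of its largest bag minus 1."""
    return max(len(b) for b in bags) - 1

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three conditions: every vertex is in some bag, every graph
    edge lies inside some bag, and the bags containing any fixed vertex
    form a connected subtree of the decomposition tree."""
    if not all(any(v in b for b in bags) for v in vertices):
        return False
    if not all(any(u in b and v in b for b in bags) for u, v in edges):
        return False
    for v in vertices:
        nodes = {i for i, b in enumerate(bags) if v in b}
        comp, frontier = set(), [next(iter(nodes))]
        while frontier:  # grow a component through tree edges within `nodes`
            i = frontier.pop()
            if i in comp:
                continue
            comp.add(i)
            for a, b in tree_edges:
                if a == i and b in nodes:
                    frontier.append(b)
                elif b == i and a in nodes:
                    frontier.append(a)
        if comp != nodes:
            return False
    return True

# 4-cycle 0-1-2-3-0 with bags {0,1,2} and {0,2,3} joined by one tree edge.
V, E = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]
bags, T = [{0, 1, 2}, {0, 2, 3}], [(0, 1)]
print(is_tree_decomposition(V, E, bags, T), decomposition_width(bags))
```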
Business Process Consistency Analysis of Petri Net Based on Probability and Time Factor
YANG Hao-ran, FANG Xian-wen
Computer Science. 2020, 47 (5): 59-63.  doi:10.11896/jsjkx.190500119
As one of the important parts of business process management, business process consistency analysis has been a hot topic in business process management research in recent years. Existing methods mainly study two aspects, control flow and data flow; in practice, however, probability and time factors also have a major impact on business processes. This paper therefore proposes a Petri net business process consistency analysis method based on probability and time factors. First, control-flow Petri nets with probability factors and data-flow Petri nets with time factors are defined. Then, all transitions of the control-flow Petri net with probability factors and of the data-flow Petri net with time factors are respectively mapped onto the original business process Petri net, and the respective behavior maps are obtained. Corresponding behavioral compatibility algorithms for the two types of Petri nets are proposed, and the consistency degree of the business process is measured by the behavioral compatibility value. Finally, the effectiveness and superiority of the method are demonstrated by an example.
Database & Big Data & Data Science
Enhancer-Promoter Interaction Prediction Based on Multi-feature Fusion
HU Yu-jia, GAN Wei, ZHU Min
Computer Science. 2020, 47 (5): 64-71.  doi:10.11896/jsjkx.191100027
Studying the mechanism of enhancer-promoter interaction helps to understand gene regulation, thus revealing specific genes relevant to diseases and providing new clinical methods and ideas for disease diagnosis and treatment. Traditional biological analysis methods are expensive and time-consuming and, due to limited resolution, find it difficult to identify specific interactions precisely; computational methods for such biological problems have therefore become a hot research topic in recent years. These methods can automatically learn sequence features and spatial structures through complex network structures, so as to predict enhancer-promoter interactions precisely and accurately. This paper first introduces the state of research on traditional biological detection methods. Then, from the perspective of sequence features, the application of statistical and deep learning methods to enhancer-promoter interaction prediction is summarized and organized around the basic idea of multi-feature fusion. Finally, the research hotspots and challenges in this field are summarized and analyzed.
Overlapping Community Detection Method Based on Rough Sets and Density Peaks
ZHANG Qin, CHEN Hong-mei, FENG Yun-fei
Computer Science. 2020, 47 (5): 72-78.  doi:10.11896/jsjkx.190400160
With the development of the Internet and society, a large amount of interrelated and interdependent data is produced in various fields every day, forming various complex networks according to different themes. Mining the community structure of complex networks is an important research topic, with great significance for recommendation systems, behavior prediction and information spreading. Moreover, overlapping community structures exist universally in real complex networks, so detecting them has practical research significance. In order to detect overlapping communities effectively, this paper proposes an overlapping community detection method based on rough sets and density peaks (OCDRD), in which rough set theory is used to analyze communities and identify overlapping nodes. Firstly, global similarities among network nodes are obtained by grey relational analysis based on the traditional local similarity measure of network nodes, and are then converted into distances between nodes. Applying the idea of density peaks clustering, the community center nodes are selected automatically from the network structure. Next, the lower approximation, upper approximation and boundary region of each community are defined according to the distance-ratio relation among nodes. Finally, the distance-ratio threshold is adjusted iteratively, and the boundary region of each community is recomputed in each iteration until the optimal overlapping community structure is obtained. The OCDRD algorithm is compared with other community detection algorithms that have achieved good results in recent years, on both LFR benchmark artificial networks and real network datasets. Analysis of two common evaluation indexes, NMI and the overlapping modularity EQ, shows that OCDRD is superior to the other community detection algorithms in community partition structure and is feasible and effective.
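The density-peaks center selection step that OCDRD builds on can be sketched in its standard form: a local density within a cutoff distance, and the distance to the nearest denser point. The toy points and cutoff below are illustrative assumptions, not the paper's grey-relational distances.

```python
import math

def density_peaks_centers(points, d_c, k):
    """Density-peaks clustering (Rodriguez & Laio style) center selection:
    rho_i counts neighbours within the cutoff d_c; delta_i is the distance
    to the nearest point of strictly higher density (or, for the densest
    point, the farthest point). The k points with the largest rho_i * delta_i
    are taken as cluster centers."""
    n = len(points)
    rho = [sum(1 for j in range(n)
               if j != i and math.dist(points[i], points[j]) < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [math.dist(points[i], points[j])
                  for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else
                     max(math.dist(points[i], points[j])
                         for j in range(n) if j != i))
    ranked = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    return ranked[:k]

# Two small blobs; the densest point of each blob should be chosen.
pts = [(0, 0), (0.1, 0), (0, 0.1), (-0.1, 0), (0, -0.1),
       (10, 10), (10.1, 10), (10, 10.1)]
print(density_peaks_centers(pts, d_c=0.12, k=2))
```

In OCDRD, the pairwise distances fed into this step come from the grey-relational global similarity rather than Euclidean geometry.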
Stock Volatility Forecast Based on Financial Text Emotion
ZHAO Cheng, YE Yao-wei, YAO Ming-hai
Computer Science. 2020, 47 (5): 79-83.  doi:10.11896/jsjkx.190400145
Emotions in the stock market reflect investor behavior to a certain extent and influence investment decisions. As unstructured data, market news reflects the strengths and weaknesses of the market environment and, together with stock prices, constitutes vital market reference data that can effectively support investment decisions. This paper proposes a multi-dimensional emotional feature vectorization method that can be applied accurately and quickly to massive news data. It uses a support vector machine (SVM) model to predict the impact of financial news on the stock market, and uses bootstrap resampling to mitigate overfitting. Results on the Shanghai and Shenzhen stock indexes show that, compared with the traditional model, the proposed method improves prediction accuracy by about 8% and obtains an excess return of 6.52% during three months, proving its effectiveness.
Short-term Traffic Flow Prediction Based on DCGRU-RF Model for Road Network
XIONG Ting, QI Yong, ZHANG Wei-bin
Computer Science. 2020, 47 (5): 84-89.  doi:10.11896/jsjkx.190100213
With accelerating urbanization, the number of motor vehicles in Chinese cities is increasing rapidly, making the existing road network capacity insufficient for transportation needs; traffic congestion, environmental pollution and traffic accidents are increasing day by day. Accurate and efficient traffic flow prediction, as the core of intelligent transportation systems (ITS), can effectively alleviate these travel and management problems. Existing short-term traffic flow prediction research mainly uses shallow models and therefore cannot fully capture the characteristics of traffic flow. This paper proposes a short-term traffic flow prediction method based on a DCGRU-RF model for complex road network structures. The DCGRU network characterizes the spatio-temporal correlations in traffic flow time series data; after the dependencies and latent features in the data are obtained, a random forest (RF) model is selected as the predictor, and a nonlinear prediction model is constructed on the extracted features to produce the final prediction. In the experiments, 38 detectors on two urban roads were selected as experimental objects, traffic flow data of five working days were used, and the proposed model was compared with other common traffic flow prediction models. The experimental results show that the DCGRU-RF model further improves prediction accuracy, reaching 95%.
Multi-label Learning Algorithm Based on Association Rules in Big Data Environment
WANG Qing-song, JIANG Fu-shan, LI Fei
Computer Science. 2020, 47 (5): 90-95.  doi:10.11896/jsjkx.190300150
In traditional single-label mining, each sample belongs to exactly one label and the labels are mutually exclusive. In multi-label learning, one sample may correspond to multiple labels, and the labels are often correlated with each other; research on label correlation has gradually become a hot issue in multi-label learning. Firstly, to adapt to the big data environment, the traditional association rule mining algorithm Apriori is parallelized and improved, and a Hadoop-based parallel algorithm, Apriori_ING, is proposed to perform candidate generation, pruning and support counting in parallel. The frequent itemsets and association rules obtained by Apriori_ING are used to generate label sets, via a proposed inference-engine-based label set generation algorithm, IETG. Then, the label sets are applied to multi-label learning, and a multi-label learning algorithm, FreLP, is proposed. FreLP uses association rules to generate label sets, decomposes the original label set into multiple subsets, and then uses the LP algorithm to train classifiers. FreLP is compared with existing multi-label learning algorithms; the experimental results show that it achieves better results under different evaluation metrics.
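The sequential Apriori algorithm that Apriori_ING parallelizes can be sketched as follows: a textbook single-machine version with level-wise candidate generation, downward-closure pruning, and support counting (not the Hadoop implementation).

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: level-wise mining of frequent itemsets.
    Candidates of size k are joined from frequent (k-1)-itemsets, pruned by
    the downward-closure property, then counted against the transactions.
    Returns {frozenset(itemset): support_count}."""
    transactions = [frozenset(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}

    def support(c):
        return sum(1 for t in transactions if c <= t)

    frequent = {}
    level = {c for c in items if support(c) >= min_support}
    k = 1
    while level:
        for c in level:
            frequent[c] = support(c)
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # prune: every (k-1)-subset of a candidate must already be frequent
        level = {c for c in candidates
                 if all(frozenset(s) in frequent
                        for s in combinations(c, k - 1))
                 and support(c) >= min_support}
    return frequent

txns = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
freq = apriori(txns, min_support=3)
print(sorted(tuple(sorted(s)) for s in freq))
```

The Hadoop variant distributes exactly the candidate generation, pruning, and support counting shown here across mappers and reducers.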
Event Detection Method Based on Node Evolution Staged Optimization
FU Kun, QIU Qian, ZHAO Xiao-meng, GAO Jin-hui
Computer Science. 2020, 47 (5): 96-102.  doi:10.11896/jsjkx.190400072
Link prediction is an effective method for analyzing network evolution, and it also provides a new idea for detecting social events. Most event detection methods based on link prediction start from the macroscopic evolution of the whole network; the few detection methods that incorporate node evolution have poor stability and insufficient sensitivity to events to detect their occurrence accurately. Therefore, an event detection method based on node evolution staged optimization (NESO_ED) is proposed. Firstly, the stability of event detection is enhanced by a staged optimization method that yields an array of node index weights. Then, according to different rules, the optimal similarity index is selected for each node, so that nodes can better quantify network evolution and improve the sensitivity of event detection. In addition, the changes of the indexes selected by nodes during network evolution are analyzed, revealing the different effects of events on node evolution. On the real social network VAST, the detection sensitivity of NESO_ED is 227% higher than that of LinkEvent and 63% higher than that of NodeED, and its stability is 66% higher than that of NodeED, showing that NESO_ED can detect events more accurately and stably.
Imbalance Data Classification Based on Model of Multi-class Neighbourhood Three-way Decision
XIANG Wei, WANG Xin-wei
Computer Science. 2020, 47 (5): 103-109.  doi:10.11896/jsjkx.180601099
Imbalanced data classification is an important data classification problem; traditional classification algorithms perform poorly on the minority classes of imbalanced data. This paper therefore proposes an imbalanced data classification algorithm based on multi-class neighbourhood three-way decision. For the case of mixed data and multiple classes, the traditional three-way decision is first generalized, and a multi-class neighbourhood three-way decision model for mixed data is presented. Then, a method for setting an adaptive cost function in the model is given, and based on it, an imbalanced data classification algorithm using the multi-class neighbourhood three-way decision model is proposed. Simulation results show that the proposed algorithm achieves better classification performance on imbalanced data.
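The underlying three-way decision rule partitions objects into positive, negative and boundary regions by two thresholds on a conditional probability. A minimal sketch is below; the thresholds and probabilities are illustrative values, not the paper's adaptive cost-derived ones.

```python
def three_way_decision(prob, alpha=0.7, beta=0.4):
    """Classic three-way decision: given the conditional probability that
    an object belongs to the class, accept above alpha, reject below beta,
    and defer (boundary region) in between. In cost-sensitive variants,
    alpha and beta are derived from misclassification costs; the defaults
    here are illustrative only."""
    if prob >= alpha:
        return "positive"
    if prob <= beta:
        return "negative"
    return "boundary"

print([three_way_decision(p) for p in (0.9, 0.5, 0.1)])
```

An adaptive cost function, as in the paper, would move `alpha` and `beta` per class so that minority classes are not swallowed by the negative region.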
Computer Graphics & Multimedia
Survey of Acoustic Feature Extraction in Speech Tasks
ZHENG Chun-jun, WANG Chun-li, JIA Ning
Computer Science. 2020, 47 (5): 110-119.  doi:10.11896/jsjkx.190400122
Speech is an important means of information transmission and communication, and people often use speech as a medium for exchanging information. The acoustic speech signal contains a large amount of speaker information, semantic information and rich emotional information, giving rise to three different speech tasks: speaker recognition (SR), automatic speech recognition (ASR) and speech emotion recognition (SER). Each of the three tasks uses different techniques and specific methods for information extraction and model design in its respective field. This paper first reviews the early development of the three tasks at home and abroad, dividing the development of speech tasks into four stages, and summarizes the acoustic features common to the three tasks, explaining the focus of each type of feature. Then, given the wide application of deep learning in various fields in recent years and the resulting progress in speech tasks, the application of popular deep learning models to acoustic modeling is analyzed, and the acoustic feature extraction methods and technical routes for the three tasks are summarized in two categories, supervised and unsupervised. In addition, a multi-channel fusion model based on the attention mechanism is proposed for feature extraction; to handle the three speech tasks simultaneously, a personalized multi-task model is proposed for speech feature extraction, together with a multi-channel cooperative network model. With this design idea, the accuracy of multi-task feature extraction can be improved.
Sound Recognition and Detection Based on Multi-scale Attention Fusion in Weak Label Environment
ZHENG Wei-zhe, QIU Peng, WEI Juan
Computer Science. 2020, 47 (5): 120-123.  doi:10.11896/jsjkx.190900111
At present, most research on sound recognition and detection relies on datasets with strong labels. However, in real-world tasks it is difficult to obtain strongly labeled audio data, as audio labels are incomplete and noisy, which in turn affects the accuracy of sound recognition and detection. To this end, a multi-scale attention fusion mechanism is proposed based on a convolutional recurrent neural network model. The mechanism uses attention gating units to make better use of effective features while reducing the effect of noise in the time-frequency representation of sound. At the same time, convolution kernels of multiple sizes are combined for feature fusion, further improving the extraction of effective sound features. In addition, sound is identified by a weighting method that combines frame-level detection results. Finally, using the proposed model, a weakly labeled dataset containing 17 kinds of urban vehicle sounds, selected from the AudioSet database, is used for detection and recognition in the weak label environment. On the test set, the F1 value of the sound recognition result is 58.9%, and the F1 value of the detection result is 43.7%. Simulation experiments show that the CRNN baseline model used in this paper is more accurate than traditional sound recognition and detection models on the weakly labeled urban vehicle sound dataset, and that the methods involved, such as the importance-weighted recognition method and multi-scale attention fusion, can improve the accuracy of sound recognition and detection.
SAR Image Recognition Based on Few-shot Learning
WANG Hang, CHEN Xiao, TIAN Sheng-zhao, CHEN Duan-bing
Computer Science. 2020, 47 (5): 124-128.  doi:10.11896/jsjkx.190400136
Deep learning has become a research hotspot in the field of image recognition. Unlike traditional image recognition methods, deep learning automatically learns features from a large amount of data and has a strong ability for feature learning and representation. However, with small samples, traditional deep learning methods such as convolutional neural networks find it difficult to learn effective features, resulting in low recognition accuracy. Thus, a new small-sample image recognition algorithm is proposed for the classification and recognition of SAR images. On the basis of the convolutional neural network, it combines convolution operations with an autoencoder to form a deep convolutional autoencoder network. The algorithm first preprocesses the images and enhances them using a 2D Gabor filter, then trains the model, and finally constructs the image classification model. The proposed model can automatically learn and extract effective features from small-sample images and improve recognition accuracy. On the 10-category target classification task of the MSTAR dataset, with 10% of the training data selected as the new training data and the rest as validation data, the recognition accuracy on the test data is 76.38% for the convolutional neural network versus 88.09% for the proposed convolutional autoencoder. Experimental results show that the proposed algorithm is more effective than the convolutional neural network for small-sample image recognition.
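The 2D Gabor filter used for image enhancement has a standard closed form: a Gaussian envelope multiplied by a sinusoidal carrier. A minimal sketch of the real-valued kernel, with illustrative parameter values (size, sigma, wavelength), is:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Real part of a 2D Gabor filter: Gaussian envelope times a cosine
    carrier. theta sets the orientation, lam the wavelength, gamma the
    spatial aspect ratio; parameter values here are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

k = gabor_kernel()
print(k.shape)  # (15, 15)
```

Convolving a SAR image with a bank of such kernels at several orientations (`theta`) is the enhancement step preceding the autoencoder.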
Shadow Removal of Traffic Surveillance Video by Joint Voting in Spatial-Frequency Domain
SONG Chuan-ming, HONG Xu, WANG Xiang-hai
Computer Science. 2020, 47 (5): 129-136.  doi:10.11896/jsjkx.190400040
The static or moving shadows in traffic scenes tend to reduce the accuracy of vehicle target tracking. Effectively removing shadows is therefore an important step in processing traffic surveillance videos. However, there is currently no efficient shadow removal method that handles both static and moving shadows by simultaneously exploiting the spatial and frequency characteristics of shadows. Under such circumstances, this study proposes a shadow removal method for traffic video using a joint voting strategy in the spatial-frequency domain. The surveillance video is converted from RGB space into HSV space and then subjected to the non-subsampled shearlet transform (NSST). Assuming that the NSST coefficients follow a Gaussian distribution, the mean and standard deviation of the coefficients are used to compute a weighted mask for each scale. Subsequently, the weighted mask at the coarse scale is employed to adjust the mask at the fine scale, according to the zerotree characteristics of the multiscale coefficients. The weighted masks of different scales and color channels are then linearly combined to form a unified mask, which is binarized by an adaptive threshold calculated by maximum entropy segmentation based on the least squares method. Finally, the moving vehicle area after shadow removal is determined by a joint voting strategy using the weighted frequency-domain mask, the S-channel mask and the V-channel mask. Experimental results show that the proposed algorithm can effectively remove static and moving shadows in traffic surveillance video. It reduces the average Euclidean distance between the ideal trajectory and the vehicle trajectory output by the traditional mean shift algorithm by 95%, suppressing the interference of shadows. Meanwhile, the proposed algorithm enhances the robustness of intelligent analysis and avoids losing the target. Our research indicates that effectively combining the spatial- and frequency-domain representations of traffic surveillance video, and fully exploring the differences in texture and color features between moving vehicle areas and shadow areas, is conducive to obtaining more accurate shadow removal results. The accuracy of vehicle target tracking is thereby improved.
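The per-scale weighted mask based on the Gaussian assumption can be sketched as follows; this is an assumed implementation for illustration (the paper does not give the exact weighting formula), treating coefficients that deviate strongly from the mean as salient vehicle texture:

```python
import numpy as np

def weighted_mask(coeffs, k=1.5):
    """Sketch of a per-scale weighted mask: coefficients far from the
    Gaussian mean (in units of the standard deviation) are treated as
    salient (vehicle texture), others as smooth (background/shadow).
    The scaling constant k is a hypothetical parameter."""
    mu, sigma = coeffs.mean(), coeffs.std()
    z = np.abs(coeffs - mu) / (sigma + 1e-12)   # normalized deviation
    return np.clip(z / k, 0.0, 1.0)             # soft weight in [0, 1]
```

Masks computed this way at each scale and channel would then be linearly combined and thresholded as the abstract describes.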
Algorithm with Discriminative Analysis Dictionary Learning by Fusing Extreme Learning Machine
WANG Jun-hao, YAN De-qin, LIU De-shan, XING Yu-jia
Computer Science. 2020, 47 (5): 137-143.  doi:10.11896/jsjkx.190600090
Abstract PDF(2512KB) ( 60 )   
References | Related Articles | Metrics
Recent research has shown the speed advantage of the extreme learning machine (ELM) and the accuracy advantage of discriminative dictionary learning (DDL) in the area of image classification. However, these two methods have their respective drawbacks: in general, ELM is known to be less robust to noise, while DDL is known to be time-consuming. In order to unify this mutual complementarity and further enhance classification performance, this paper proposes a discriminative analysis dictionary learning model that fuses the extreme learning machine. More precisely, an iterative optimization algorithm is used to jointly learn the optimal discriminative analysis dictionary and the extreme learning machine classifier. In order to verify the effect of the proposed algorithm, face data are used for classification. Experiments demonstrate that the method achieves better performance than state-of-the-art dictionary learning algorithms and the extreme learning machine in a variety of image classification tasks.
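For reference, the ELM component mentioned above can be sketched in its basic form; this is a minimal standalone ELM, not the paper's fused dictionary model, and the hidden size and data shapes are illustrative assumptions:

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Minimal extreme learning machine sketch: hidden-layer weights are
    random and fixed; only the output weights are solved in closed form
    with a pseudoinverse (this is the source of ELM's speed advantage)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear feature map
    beta = np.linalg.pinv(H) @ Y           # least-squares output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The paper's contribution is to learn such a classifier jointly with an analysis dictionary rather than on raw features.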
Quantizing Weights and Activations in Generative Adversarial Networks
ZHENG Zhe, HU Qing-hao, LIU Qing-shan, LENG Cong
Computer Science. 2020, 47 (5): 144-148.  doi:10.11896/jsjkx.190700176
Abstract PDF(2533KB) ( 93 )   
References | Related Articles | Metrics
In recent years, generative adversarial networks (GANs) have shown excellent performance in many computer vision tasks such as image super resolution and image generation. GANs can afford to be greedy in computational complexity because they typically rely on large amounts of GPU computation. For resource-limited mobile devices, however, deploying a GAN is intractable due to its high consumption of both energy and computation. Thanks to the great success of neural network compression, it becomes possible to deploy GANs on mobile devices. This paper proposes a method to simultaneously quantize the weights and activations in GANs. Sensitivity analysis shows that the weights are more sensitive to quantization than the activations. This paper uses the Fréchet Inception Distance (FID) score to evaluate the images generated by quantized GANs, since the Inception score is less applicable than FID here. Motivated by the sensitivity analysis, extensive experiments were conducted on the MNIST and CelebA datasets. Results show that the proposed method can compress GANs by up to 4x and still achieve performance even higher than that of the original GANs, thus effectively addressing the problem of compressing GANs.
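A generic uniform quantizer, commonly used for weights and activations, can be sketched as below; this is an assumed baseline scheme for illustration, not necessarily the paper's exact quantization function:

```python
import numpy as np

def quantize_uniform(x, n_bits=8):
    """Uniform quantization sketch: map values onto 2**n_bits evenly
    spaced levels over [min, max], then de-quantize back. Lower n_bits
    means stronger compression and larger rounding error."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels
    q = np.round((x - lo) / scale)        # integer code in [0, levels]
    return q * scale + lo                 # de-quantized value
```

Sensitivity analysis as in the abstract would compare FID after applying such a quantizer to weights versus activations at various bit widths.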
Diffuse Interface Based Unsupervised Images Clustering Algorithm
WANG Cheng-zhang, BAI Xiao-ming, DU Jin-li
Computer Science. 2020, 47 (5): 149-153.  doi:10.11896/jsjkx.190300125
Abstract PDF(2490KB) ( 57 )   
References | Related Articles | Metrics
Unsupervised clustering of images aims to partition the whole image set into several subsets on the basis of the image data itself, without any prior information. As the dimensionality of an image is usually very high, the curse of dimensionality arises during image processing. Having analyzed the image clustering problem, a novel unsupervised image clustering algorithm is proposed, based on a diffuse interface model on a graph. Images are encoded as data points in a high-dimensional observation space and then projected into a low-dimensional feature space. A diffuse interface based unsupervised clustering algorithm is constructed in the feature space, and a dimension reduction operator is introduced into the model. A loop iterative algorithm is employed to optimize the energy function of the diffuse interface model, and the optimized diffuse interface is adopted to cluster the images into different subsets. Experimental results show that the proposed algorithm is superior to the traditional K-means, DBSCAN and spectral clustering algorithms: it achieves better clustering results and lower error rates.
2D-Otsu Rail Defect Image Segmentation Method Based on WFSOA
CAO Yi-qin, DUAN Ye-yu, WU Dan
Computer Science. 2020, 47 (5): 154-160.  doi:10.11896/jsjkx.190200295
Abstract PDF(2070KB) ( 44 )   
References | Related Articles | Metrics
Aiming at the problems that the two-dimensional maximum between-class variance threshold method (2D-Otsu) has weak noise resistance and a long calculation time, a seeker optimization algorithm based on random weight and an asynchronous value factor (WFSOA) is proposed and applied to 2D-Otsu image segmentation of rail defects. The random weight is used to speed up the convergence of the algorithm, and the asynchronous value factor is used to improve its search ability, which is conducive to global convergence to the optimal value. Analysis on test functions shows that WFSOA converges quickly, reaches high optimization precision within a short convergence time, and has good stability. In the image segmentation of rail defects, the 2D-Otsu trace function is used as the objective function of WFSOA. The experimental results show that the image detection has high real-time performance, the segmentation results of rail defects are clear, and the false detection rate and missed detection rate of rail defects are effectively reduced. The running time is only 2% of that of the 2D-Otsu algorithm, which meets the needs of actual engineering.
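For intuition about the objective being optimized, the one-dimensional Otsu criterion can be sketched as follows; the paper uses the 2D variant (whose trace function WFSOA maximizes), but the between-class variance idea is the same:

```python
import numpy as np

def otsu_threshold(gray):
    """1D Otsu sketch for illustration: pick the threshold maximizing the
    between-class variance of the gray-level histogram. WFSOA replaces
    this exhaustive scan with a faster metaheuristic search in 2D."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    mu_total = (np.arange(256) * p).sum()
    best_t, best_var = 0, -1.0
    w0, mu0 = 0.0, 0.0                     # cumulative weight and moment
    for t in range(255):
        w0 += p[t]
        mu0 += t * p[t]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        var = (mu_total * w0 - mu0) ** 2 / (w0 * w1)  # between-class var.
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The 2D version builds a joint histogram of gray level and neighborhood mean, which is what makes exhaustive search slow and motivates WFSOA.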
Lightweight Convolutional Neural Networks for Land Battle Target Recognition
QIAO Meng-yu, WANG Peng, WU Jiao, ZHANG Kuan
Computer Science. 2020, 47 (5): 161-165.  doi:10.11896/jsjkx.190300062
Abstract PDF(2366KB) ( 73 )   
References | Related Articles | Metrics
In an actual land battle environment, people cannot carry large computing devices such as GPUs with them. Large-scale neural network parameters are therefore difficult to compute, which prevents the target recognition network from running in real time. To this end, a target recognition algorithm based on a lightweight convolutional neural network (E-MobileNet) is proposed. To improve the learning effect of the network, the ELU function is inserted as the activation function into the existing lightweight framework MobileNet-V2. Firstly, expansion convolution is used to increase the number of channels so as to obtain more features, which are activated and output through the ELU function; this alleviates gradient vanishing in the linear part and makes the nonlinear part more robust to noise in the input. Then, residual connections combine high-level features with low-level features for output. Finally, the output is produced by a Softmax layer after global pooling. The experimental data show that, compared with the current mainstream lightweight deep learning target recognition algorithms, E-MobileNet improves both the recognition accuracy and the frames per second on the same test set in the same test environment. The experiments fully demonstrate that using the ELU activation function and a global pooling layer reduces the number of parameters, enhances the generalization ability of the model, and improves the robustness of the algorithm. On the basis of keeping the neural network model lightweight, the recognition accuracy of the target is effectively improved.
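The ELU activation referenced above has the standard definition below; unlike ReLU, it keeps negative inputs informative through a saturating exponential, which is the property the abstract credits for robustness:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, and alpha*(exp(x)-1) for
    x <= 0. The smooth negative branch avoids the hard zero of ReLU."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```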
Retinal Vessel Segmentation Based on Dual Attention and Encoder-decoder Structure
LI Tian-pei, CHEN Li
Computer Science. 2020, 47 (5): 166-171.  doi:10.11896/jsjkx.190400062
Abstract PDF(3173KB) ( 71 )   
References | Related Articles | Metrics
The segmentation of retinal vessels in fundus images is important for the diagnosis of ophthalmic diseases such as diabetic retinopathy and glaucoma. Aiming at the difficulty of extracting blood vessels from retinal images and the lack of data samples, a retinal vessel segmentation method combining attention modules with an encoder-decoder structure is proposed. To improve the segmentation of retinal blood vessels, a spatial and a channel attention module are added to each convolutional layer of the encoder-decoder convolutional neural network to enhance the utilization of spatial and channel information in the image features (such as the size, shape and connectivity of the blood vessels): the spatial attention focuses on the topological characteristics of the blood vessels, and the channel attention focuses on the correct classification of blood vessel pixels. Moreover, the Dice loss function is used to address the imbalance of positive and negative samples in retinal blood vessel images. The proposed method is evaluated on three public fundus image databases, DRIVE, STARE and CHASE_DB1. The experimental data show that its accuracy, sensitivity, specificity and AUC are superior to those of existing retinal vessel segmentation methods, with AUC values of 0.9889, 0.9812 and 0.9831, respectively. The experimental results show that the proposed method can effectively extract the vascular network in both healthy and diseased retinal images, and can segment small blood vessels well.
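The Dice loss mentioned above has a standard soft form, sketched below; vessel pixels are rare, and because Dice normalizes by the sizes of both the prediction and the target, it weights them far more heavily than plain cross-entropy:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss sketch: 1 - 2|P∩T| / (|P| + |T|).
    pred holds probabilities in [0, 1]; target holds binary labels.
    eps guards against division by zero on empty masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```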
Integration of Internal and External Priors Based on Dirichlet Process Mixture Model
ZHANG Mo-hua, PENG Jian-hua
Computer Science. 2020, 47 (5): 172-180.  doi:10.11896/jsjkx.190400060
Abstract PDF(3164KB) ( 47 )   
References | Related Articles | Metrics
In recent years, Bayesian approaches using a Gaussian mixture model as a patch prior have achieved great success in image restoration. Aiming at the shortcomings of such models, namely their fixed number of components and their reliance mainly on external learning, a new image prior model based on the Dirichlet process mixture model is proposed. The model learns external generic priors from a set of external clean images and learns internal priors from the given degraded image. Due to the accumulative property of the statistics in the model, the integration of internal and external priors is naturally achieved in image restoration. Through the addition and merging of cluster components, the model complexity can change adaptively as the data increase or decrease, so that more interpretable and more compact models can be learned. In order to solve for the variational posterior of all hidden variables, a scalable variational algorithm combining batch updates with birth and merge mechanisms is proposed. The new algorithm improves on the traditional coordinate ascent algorithm, which is relatively inefficient on large data sets and often falls into local optima. The effectiveness of the proposed model is verified by image denoising and inpainting experiments, in which it shows advantages over traditional methods both in objective quality assessments and in visual perception.
Artificial Intelligence
Comprehensive Review of Autonomous Taxi Dispatching Systems
ZENG Wei-liang, WU Miao-sen, SUN Wei-jun, XIE Sheng-li
Computer Science. 2020, 47 (5): 181-189.  doi:10.11896/jsjkx.190400031
Abstract PDF(1930KB) ( 91 )   
References | Related Articles | Metrics
The development of artificial intelligence and big data science has brought about a new generation of industrial revolution. Autonomous taxi dispatching systems based on artificial intelligence, big data and connected vehicle technology will be one of the most important components of the next-generation transportation system. The existing literature is reviewed and discussed from two aspects: the scheduling modeling and the system construction problem of the autonomous taxi dispatching system. In particular, the critical dispatching problems concerning sharing modes, sharable route planning and pricing strategies for a dynamic and sharable dispatching scheme are discussed. Finally, the advantages of autonomous taxis, such as alleviating traffic congestion and reducing resource waste, are summarized, and the technical difficulties and development prospects of future autonomous taxi systems are pointed out.
Analyzing Latent Representation of Deep Neural Networks Based on Feature Visualization
SHANG Jun-yuan, YANG Le-han, HE Kun
Computer Science. 2020, 47 (5): 190-197.  doi:10.11896/jsjkx.190700128
Abstract PDF(3369KB) ( 71 )   
References | Related Articles | Metrics
The working mechanism of deep neural networks can be intuitively uncovered by visualization techniques. Visualizing deep neural networks provides interpretability for the decisions made by a black-box model, which is critically important in many fields such as medical diagnosis and autonomous driving. Most existing works are based on the activation maximization technique, which optimizes the input, the hidden feature map or the original image conditioned on the neuron to be observed. Qualitatively, the change in the input value can be taken as an explanation when the neuron has reached nearly its maximum activation value. However, such methods lack a quantitative analysis of deep neural networks. To fill this gap, this paper proposes two meta methods, namely structure visualization and rule-based visualization. Structure visualization works by visualizing from the shallow layers to the deep layers, and finds that neurons in shallow layers learn global characteristics while neurons in deep layers learn more specific features. Rule-based visualization includes intersection and difference selection rules, and helps to find shared neurons and inhibition neurons, which learn the common features of different categories and inhibit unrelated features, respectively. Experiments are conducted on two representative deep networks, the convolutional network VGG and the residual network ResNet, using the ImageNet and COCO datasets. Quantitative analysis shows that ResNet and VGG are highly sparse in representation, so that by removing some low-activation "noisy" neurons, the networks can keep or even improve their classification accuracy. This paper uncovers the latent representation of deep neural networks by visualizing and quantitatively analyzing hidden features, thus providing guidance and reference for the design of high-performance deep neural networks.
Candidate Sentences Extraction for Machine Reading Comprehension
GUO Xin, ZHANG Geng, CHEN Qian, WANG Su-ge
Computer Science. 2020, 47 (5): 198-203.  doi:10.11896/jsjkx.190300154
Abstract PDF(2075KB) ( 52 )   
References | Related Articles | Metrics
The ultimate goal of artificial intelligence is to let machines understand human natural language at the cognitive level. Machine reading comprehension poses a great challenge in natural language processing: it requires a computer to have certain common knowledge, comprehensively understand a text, and correctly answer questions about that text. With the rapid development of deep learning, machine reading comprehension has become a hot research direction in artificial intelligence, involving core technologies such as machine learning, information retrieval and semantic computing, and has been widely used in chatbots, question answering systems and intelligent education. This paper focuses on the micro-reading mode, in which candidate sentences containing answers are extracted from a given text, providing technical support for machine reading comprehension. Traditional feature-based methods consume a lot of manpower. This paper regards candidate sentence extraction as a semantic relevance calculation problem and proposes an Att-BiGRU/LSTM model. First, LSTM and GRU are used to encode the semantics expressed in a sentence. Then, the dissimilarity and similarity are captured with an Atten structure for semantic correlation. Finally, the Adam optimizer is used to learn the model parameters. Experimental results show that the Att-BiGRU model exceeds the baseline method by nearly 0.67 in terms of Pearson correlation and by 16.8% in terms of MSE on the SemEval-SICK test dataset, which proves that the combination of the bidirectional structure and attention can greatly improve both the accuracy of candidate sentence extraction and the convergence rate.
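The attention step over encoder states can be sketched generically as below; the shapes and scoring vector are illustrative assumptions, not the paper's exact Atten structure:

```python
import numpy as np

def attention_pool(H, w):
    """Simple attention sketch over encoder hidden states:
    H is a (T, d) matrix of per-timestep states, w a (d,) scoring
    vector. Softmax over scores yields weights; the pooled vector is
    a weighted sum of the states."""
    scores = H @ w                      # one score per time step
    a = np.exp(scores - scores.max())   # numerically stable softmax
    a /= a.sum()
    return a, a @ H                     # weights and pooled sentence vector
```

A BiGRU/BiLSTM would produce H; the pooled vectors of the question and a candidate sentence would then be compared for semantic relevance.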
Integrated Optimization of Location Assignment and Job Scheduling in Automated Storage and Retrieval System
TANG Hong-tao, YAN Wei-jie, CHEN Qing-feng, LU Jian-sha, ZHAN Yan
Computer Science. 2020, 47 (5): 204-211.  doi:10.11896/jsjkx.190400042
Abstract PDF(3329KB) ( 73 )   
References | Related Articles | Metrics
To improve the dynamic operation efficiency of a single-shuttle stacker automated storage and retrieval system (AS/RS), an integrated optimization method for location assignment and job scheduling based on shared location storage and a dynamic order picking strategy is proposed. The dynamic shift-library optimization is extended to the entire picking life cycle of the warehouse, and a mathematical model is established that minimizes the total working time required by the stacker to perform tasks under the single-shuttle dual-command cycle. A PSO algorithm based on the K-Medoids clustering algorithm is designed: the K-Medoids algorithm analyzes the initial locations of products through the correlation between products and orders and screens out the range of inferior solutions, and the PSO algorithm then searches for an exact solution within the solution clusters generated by K-Medoids. The experiments show that considering case transfers under special circumstances can improve the operation efficiency of the warehouse by 20%, and the solution time of the algorithm is reduced by about 66% compared with the traditional PSO algorithm.
Heuristic Algorithms for Two-dimensional Irregular Bin Packing Problem with Guillotine Constraints
ZHANG Xu, WANG Li-li, YANG Bo-tao
Computer Science. 2020, 47 (5): 212-216.  doi:10.11896/jsjkx.190400078
Abstract PDF(2011KB) ( 66 )   
References | Related Articles | Metrics
Aiming at the two-dimensional irregular packing problem with guillotine constraints in the cutting field, the definition of the minimum moving distance is given first, and a heuristic algorithm based on the maximum moving distance is proposed. The algorithm calculates the maximum distance a convex polygon needs to slide to enter the interior of another convex polygon. This one-shot positioning of the layout avoids the traditional NFP intersection method, and a simulated annealing algorithm is used to optimize the packing process, which greatly reduces the overall layout time while meeting the high requirements on layout quality.
Coyote Optimization Algorithm Based on Information Sharing and Static Greed Selection
ZHANG Xin-ming, LI Shuang-qian, LIU Yan, MAO Wen-tao, LIU Shang-wang, LIU Guo-qi
Computer Science. 2020, 47 (5): 217-224.  doi:10.11896/jsjkx.190400039
Abstract PDF(2104KB) ( 57 )   
References | Related Articles | Metrics
The coyote optimization algorithm (COA) is a novel intelligent optimization algorithm with great application potential, but it suffers from a long running time and insufficient search ability. This paper proposes an improved COA, namely COA based on information sharing and static greedy selection (ISCOA). Firstly, a new information sharing model is constructed and applied to the growth of all coyotes in the subgroup: the shared information differs more in the early growth stage so as to increase population diversity, and less in the late stage so as to benefit exploitation. Secondly, a new intra-group growth mode is constructed: in the early stage, a new growth mechanism based mainly on the information sharing model strengthens exploration, while in the later stage the growth mechanism of the original algorithm, based mainly on the guidance of the alpha wolf and the cultural trend, strengthens exploitation. Finally, the intra-group greedy selection of the original algorithm is changed into a static greedy selection, which improves the stability of the algorithm, enables parallel evaluation of the objective function, and improves the running speed. Extensive experiments on the complex functions of the CEC2017 test set show that, compared with COA, ISCOA obtains an advantage on 23 and 24 of the 29 functions in 10 and 30 dimensions, respectively, while its average running time is only 86.3% and 85.7% of COA's in 10 and 30 dimensions, respectively. Compared with 7 state-of-the-art algorithms, the average rankings of ISCOA on the 10-dimensional and 30-dimensional functions are 1.48 and 1.69, ranking first 17 and 18 times, respectively, with better optimization results. Experimental results on a practical engineering problem show that ISCOA achieves the best results. All of these prove that ISCOA has stronger search ability, is more competitive, and has better application prospects.
Fatigue Detection System Based on Single Channel EEG Signal
WANG Bo-shi, WU Xiu-cheng, HU Xin-yi, ZHANG Li
Computer Science. 2020, 47 (5): 225-229.  doi:10.11896/jsjkx.190400127
Abstract PDF(2364KB) ( 77 )   
References | Related Articles | Metrics
Aiming at sudden death among people with high labor intensity, this paper designs a fatigue detection system based on single-channel EEG to accurately judge the fatigue level and give such people a timely warning. The system uses the TGAM (ThinkGear AM) module to collect raw EEG data and transmits them to the host computer via Bluetooth, where the four basic rhythm components (δ, θ, α, β) of the EEG are extracted. The relative band energies of these rhythms are used as EEG features characterizing the fatigue state, and Fisher discriminant analysis (FDA) and a probabilistic neural network (PNN) are used to classify the EEG features, after which the evaluation results are given. The experimental results show that the designed single-channel EEG-based fatigue detection system achieves high accuracy in fatigue state detection.
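The relative band energy feature can be sketched as below; this is an assumed FFT-based implementation (the paper does not specify the exact spectral estimator), shown for the alpha band of a synthetic signal:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Relative band energy sketch: the share of total FFT power that
    falls between lo and hi Hz. fs is the sampling rate in Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = spec[(freqs >= lo) & (freqs < hi)].sum()
    return band / spec.sum()

# Example: alpha-band (8-13 Hz) share of a pure 10 Hz sine, 2 s at 256 Hz
fs = 256
t = np.arange(0, 2, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
```

Features like this for each rhythm band would then be fed to the FDA or PNN classifier.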
Vehicular Networking Enabled Vehicle State Prediction with Two-level Quantized Adaptive Kalman Filtering
FENG An-qi, QIAN Li-ping, OUYANG Jin-yuan, WU Yuan
Computer Science. 2020, 47 (5): 230-235.  doi:10.11896/jsjkx.190300155
Abstract PDF(2808KB) ( 46 )   
References | Related Articles | Metrics
With the rapid development of urbanization and motorization, traffic safety issues have been drawing more and more attention. Accurate prediction of the vehicle state based on data acquired by a vehicular networking system plays an important role in improving traffic safety. This paper proposes a two-level quantized adaptive Kalman filter algorithm (QAKF) based on the auto-regressive moving average (ARMA) model to predict the vehicle state (i.e., the moving direction, driving lane, speed and acceleration). First, a vehicular networking system is developed to acquire vehicle data by exchanging traffic data between the on-board unit (OBU) and the roadside unit (RSU). Then, the vehicle state is predicted at the edge cloud server co-located with the roadside unit. Finally, the edge cloud server broadcasts the predicted state to other roadside units so that other vehicles at the intersection can obtain the vehicle information. The numerical results verify the effectiveness of the ARMA model for predicting acceleration and evaluate the efficiency of the proposed algorithm: compared with three other prediction algorithms, it improves the speed prediction accuracy by 90.62%, 89.81% and 82.76%, respectively, which implies that the algorithm can effectively predict the vehicle state in vehicular networks.
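The Kalman filtering core of such a predictor can be sketched in its scalar form; this is illustrative only, as the paper's QAKF adds ARMA-based prediction and two-level quantization on top of the basic predict/update cycle, and the noise constants here are made up:

```python
def kalman_1d(zs, q=1e-3, r=0.25, x0=0.0, p0=100.0):
    """Minimal scalar Kalman filter sketch (constant-state model).
    zs: measurements; q: process noise; r: measurement noise;
    x0, p0: initial state estimate and its variance."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                   # predict step
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return out

# Smoothing noisy speed readings fluctuating around 20 m/s
est = kalman_1d([20.4, 19.7, 20.1, 20.3, 19.8, 20.0])
```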
Improved Fruit Fly Algorithm for Photovoltaic MPPT Control Strategy
FU Zi-yi, CHENG Bing, SHAO Lu-lu
Computer Science. 2020, 47 (5): 236-241.  doi:10.11896/jsjkx.190400096
Abstract PDF(2359KB) ( 26 )   
References | Related Articles | Metrics
Local shading reduces the efficiency of a photovoltaic power generation system. Under local shading conditions, the output power characteristic curve of the photovoltaic system exhibits multiple peaks. Traditional maximum power point tracking methods do not have global search ability and fail in multi-peak maximum power tracking. The fruit fly optimization algorithm (FOA) has global optimization ability, but suffers from slow convergence, low convergence accuracy and easy convergence to local optima. This paper improves the fruit fly algorithm and proposes a Lévy-FOA algorithm combined with an adaptive Lévy flight step size. The algorithm makes full use of the non-uniform random walk of Lévy flights and introduces an adaptive step adjustment factor. It improves the position updating method of the original algorithm, improves the convergence speed and accuracy, and avoids falling into local extrema. Three standard functions are used to analyze the convergence of the adaptive Lévy-FOA algorithm, and the results are compared with those of the conventional FOA algorithm and a particle swarm optimization with adaptively improved learning factors (APSO). The comparison shows that the average tracking time of the adaptive Lévy-FOA algorithm is significantly reduced, and its average convergence accuracy is improved by four orders of magnitude. Finally, the adaptive Lévy-FOA algorithm is applied to maximum power tracking of a photovoltaic system. The simulation results show that under different illumination conditions, the algorithm achieves maximum power tracking in fewer iterations and reaches more than 90% of the maximum power after the first iteration. Compared with other algorithms, the adaptive Lévy-FOA algorithm has a shorter tracking time and higher tracking accuracy, and its superior practical optimization ability can improve the output efficiency of the photovoltaic system.
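Lévy flight steps are commonly drawn with Mantegna's algorithm, sketched below; this is a standard construction for heavy-tailed steps, and the paper's adaptive step adjustment factor is not reproduced here:

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """Lévy flight step sketch via Mantegna's algorithm:
    step = u / |v|**(1/beta), with u ~ N(0, sigma_u**2) and v ~ N(0, 1).
    Occasional large steps let the search escape local optima."""
    rng = rng or random.Random()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)
```

In an FOA-style update, each fruit fly's position would be perturbed by such a step scaled by the adaptive factor.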
Computer Network
Research Progress on Key Technologies of Data Storage Based on Wireless Sensor Networks in Wide-Area Complex Fluid Systems
ZHANG Jie, LIANG Jun-bin, JIANG Chan
Computer Science. 2020, 47 (5): 242-249.  doi:10.11896/jsjkx.190400025
Abstract PDF(2238KB) ( 61 )   
References | Related Articles | Metrics
Fluid systems, including urban water supply pipe networks and natural gas supply pipe networks, are infrastructure with important economic and social value. They are geographically widely distributed, structurally complex, large in scale and difficult to inspect; when leakage, pollution or other abnormalities occur, they are difficult to find quickly and locate accurately. With the development of sensor technology, communication technology and micro-electromechanical technology, using wireless sensor networks to monitor such systems has become a research hotspot. Because communication within a fluid system is difficult, monitored data often cannot be transmitted to the user in real time and can only be stored temporarily on a sensor node (node for short), waiting for a suitable moment to be uploaded. However, nodes are small, easily damaged, and limited in storage capacity, communication capability and energy, so reliably storing large amounts of data is a difficult problem. Some work has already addressed this issue. In order to understand the progress of research in this field, this paper analyzes, compares and summarizes the related work in detail, discusses its advantages and disadvantages, and points out future research directions.
D2D Multicast Content Sharing Scheme Based on Physical-Social Awareness and Payment Incentive
FU Qin-xue, AO Liang, YANG Lian-xin, WU Yan
Computer Science. 2020, 47 (5): 250-259.  doi:10.11896/jsjkx.190400143
Abstract PDF(2416KB) ( 35 )   
References | Related Articles | Metrics
Multimedia services, especially online video services, are developing explosively. D2D (Device-to-Device) multicast content sharing is considered a key technology for handling massive data delivery. However, most current research on D2D multicast content sharing focuses on improving the energy efficiency of the system, while little attention is paid to the sum data rate of the system, an important index of whether the system can distribute content efficiently. In order to establish a user model closer to the actual scene and implement efficient content delivery that relieves the burden on base stations and improves the utilization of resources (spectrum and energy), this paper proposes a D2D multicast content sharing scheme based on physical-social awareness and payment incentives. Firstly, D2D multicast communication is modeled according to the limitations of the actual scene, and the application scene of the model is extended to "hot spot" areas with high-data-rate content sharing where people are concentrated, and to "blind spot" areas in earthquake relief operations where data cannot easily be transmitted directly by base stations. Then, in order to effectively reduce the load of base stations and cope with huge amounts of data delivery, an optimization problem is formulated that takes the equivalent sum data rate of the system as the objective function under multiple constraints. In the objective function, a payment mechanism is introduced to encourage users to serve as cluster heads providing shared content for other users, and social ties based on similarity of interests are introduced to reduce user payment cost and improve resource utilization efficiency. Finally, a cluster head selection and cluster formation algorithm is proposed to solve this problem. In the cluster head selection algorithm, social ties based on similarity of user interests are introduced while considering the limit of the user data rate threshold. In the cluster formation algorithm, a centrally controlled coalition formation game is adopted, in which the gain definition is highly consistent with the connotation of "coalition". The simulation results show that the performance of the proposed scheme in terms of equivalent and actual sum data rate is significantly improved compared with similar schemes, and that the proposed scheme is suitable for large-scale user networks.
Improvement of LZW Algorithms for Wireless Sensor Networks
NI Xiao-jun, SHE Xu-hao
Computer Science. 2020, 47 (5): 260-264.  doi:10.11896/jsjkx.190400108
Abstract PDF(1550KB) ( 46 )   
References | Related Articles | Metrics
In wireless sensor network communication, sensor data need to be sent to the host computer through wireless devices. As the amount of data transmitted by the terminal sensors increases, the energy consumption of the wireless devices gradually increases, and a complex environment that is inconvenient for timely maintenance leads to premature failure of wireless communication equipment and communication interruption. Therefore, it is necessary to compress the data collected by the sensors to reduce the amount of data sent. Based on an analysis of sensor data characteristics and the traditional LZW (Lempel-Ziv-Welch) compression algorithm, an improved LZW algorithm for wireless sensor network applications is proposed. Firstly, the algorithm preprocesses adjacent sensor readings by taking their differences, so as to improve the repetition rate of data items. Then, an appropriate dictionary size is selected and the traditional sequential storage of the dictionary is replaced with hash storage, improving the way the dictionary is updated: when the compression rate drops, the algorithm updates the dictionary, saving the common single characters and releasing dictionary space for further compression. The experimental results show that compared with the traditional LZW algorithm, the improved LZW algorithm reduces the compression rate of ordered sensor data by up to 40% and reduces the amount of data to be sent, while the compression speed increases by nearly ten times, which proves that the improved LZW algorithm for wireless sensor network applications is effective and feasible.
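The difference preprocessing and the basic LZW step can be sketched as below; this is plain-dictionary LZW for illustration, without the paper's hash storage or dictionary pruning:

```python
def delta_encode(samples):
    """Difference preprocessing: keep the first sample, then successive
    differences. Slowly varying sensor data become highly repetitive."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def lzw_compress(symbols):
    """Basic LZW sketch emitting integer codes. The dictionary starts
    with all single symbols and grows by one entry per emitted code."""
    alphabet = sorted(set(symbols))
    table = {(s,): i for i, s in enumerate(alphabet)}
    out, cur = [], ()
    for s in symbols:
        if cur + (s,) in table:
            cur += (s,)                     # extend the current phrase
        else:
            out.append(table[cur])          # emit code for known phrase
            table[cur + (s,)] = len(table)  # learn the new phrase
            cur = (s,)
    if cur:
        out.append(table[cur])
    return out
```

For a steadily rising reading like `[100, 101, 102, 103, 104, 105]`, delta encoding yields `[100, 1, 1, 1, 1, 1]`, which LZW compresses into fewer codes than the raw sequence.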
Link Prediction Method Based on Weighted Network Topology Weight
YUAN Rong, SONG Yu-rong, MENG Fan-rong
Computer Science. 2020, 47 (5): 265-270.  doi:10.11896/jsjkx.190600031
Abstract PDF(2214KB) ( 59 )   
References | Related Articles | Metrics
In recent years, link prediction in complex networks has drawn more and more attention, and as its applications have become increasingly extensive, a crucial question is how to improve its accuracy. Many proposals have been made, among which weighted similarity indices have already achieved promising results. However, traditional weighted-network link prediction considers only the natural weight of a link and neglects the influence of topological weights on prediction accuracy. Therefore, aiming at weighted networks, this paper takes the clustering and diffusion characteristics of edges into consideration, regards them as the topological weights of edges, and accordingly proposes four similarity indices based on the topological weight of links, namely WCD-CN, WCD-AA, WCD-RA and WCD-LP. Taking Matlab as the experimental platform, experiments are carried out on two weighted datasets (USAir, Bibble) and two unweighted datasets (Pblogs and Dolphins), with AUC as the evaluation index. The simulation results indicate that, compared with two weighted indices based on natural weight and on the clustering coefficient respectively, the proposed indices achieve higher prediction accuracy.
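For readers unfamiliar with weighted similarity indices, the common-neighbours family on which WCD-CN builds can be sketched as below. This is a generic weighted common-neighbours score, not the paper's exact WCD-* formula: the paper additionally folds clustering- and diffusion-based topological weights into the edge weights, and the adjacency representation here is an assumption.

```python
def weighted_common_neighbors(adj, x, y):
    """Score a candidate link (x, y) by summing, over every common
    neighbour z, the weights of the two edges that connect x and y
    through z. adj is a dict-of-dicts: adj[u][v] = weight of edge (u,v)."""
    nx = set(adj.get(x, {}))
    ny = set(adj.get(y, {}))
    return sum(adj[x][z] + adj[z][y] for z in nx & ny)
```

Ranking all non-edges by such a score and checking how often a true missing link outranks a random non-link is exactly what the AUC evaluation mentioned above measures.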
Direction-of-arrival Estimation with Two-dimensional Sparse Array Based on Atomic Norm Minimization
LU Ai-hong, GUO Yan, LI Ning, WANG Meng, LIU Jie
Computer Science. 2020, 47 (5): 271-276.  doi:10.11896/jsjkx.191200139
Abstract PDF(2074KB) ( 79 )   
References | Related Articles | Metrics
Direction-of-arrival (DOA) estimation based on a two-dimensional planar sparse array is increasingly important in the massive MIMO arrays of 5G. Gridless sparse reconstruction technology has promoted DOA estimation research, and atomic norm theory has advanced the super-resolution of DOA estimation methods. This paper studies DOA estimation when spectrally sparse signals from multiple directions are incident on a two-dimensional sparse array. In order to accurately identify the paired azimuth and elevation angles of all incident signals, a two-dimensional atomic norm approach based on multiple measurement vectors (MMV) is proposed, which can be solved by semidefinite programming. The proposed algorithm extends compressive-sensing-based two-dimensional DOA estimation from a single measurement vector to multiple measurement vectors, so as to effectively exploit the joint sparsity of the MMV model. Numerical simulation results show that, as the number of measurement vectors grows, the number of identifiable sources increases while the proportion of physical sensors in the sparse array decreases to 30%, the DOA estimation error decreases significantly, and the proposed algorithm converges well as the signal-to-noise ratio increases.
Edge Computing-oriented Storm Edge Node Scheduling Optimization Method
JIAN Cheng-feng, PING Jing, ZHANG Mei-yu
Computer Science. 2020, 47 (5): 277-283.  doi:10.11896/jsjkx.190600048
Abstract PDF(1167KB) ( 55 )   
References | Related Articles | Metrics
Edge computing demands high real-time performance and big-data interactive processing. The key performance problems of edge computing are long scheduling times, high communication latency and unbalanced load among heterogeneous edge nodes, and traditional cloud computing platforms have difficulty meeting these new requirements. This paper focuses on the scheduling optimization of Storm edge nodes in the edge computing environment. Firstly, a Storm task-offloading scheduling model for edge computing is established. Then a heuristic dynamic programming algorithm is put forward to realize real-time dynamic allocation of topology tasks among heterogeneous edge nodes: by changing the sorting of Task instances and the mapping between Task instances and Slots, globally optimized scheduling is achieved. Aiming at the problem that the concurrency of topology tasks may exceed the maximum depth of the JVM stack, a scheduling strategy based on the bat algorithm is put forward, in which the global scheduling scheme is computed from the topology task information and the CPU information of the edge nodes. Experiments show that, compared with the current Storm scheduling algorithm, the proposed algorithm increases the CPU utilization of the edge nodes by about 60% on average and the throughput of the cluster by about 8.2% on average. The proposed algorithm can therefore better meet the high real-time processing requirements among edge nodes.
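The core idea of re-sorting Task instances and remapping them to Slots can be illustrated with a classic greedy load-balancing sketch: sort tasks by descending load and always place the next task on the currently least-loaded slot. This is a textbook heuristic offered for intuition only, not the paper's dynamic programming or bat-algorithm scheduler, and the load model is an assumption.

```python
import heapq

def assign_tasks(task_loads, n_slots):
    """Greedy longest-processing-time assignment of task loads to slots.
    Returns, per slot, the list of task loads placed on it."""
    # (current_load, slot_index, tasks); slot_index breaks heap ties
    slots = [(0.0, i, []) for i in range(n_slots)]
    heapq.heapify(slots)
    for load in sorted(task_loads, reverse=True):
        slot_load, i, tasks = heapq.heappop(slots)
        tasks.append(load)
        heapq.heappush(slots, (slot_load + load, i, tasks))
    return [tasks for _, _, tasks in sorted(slots, key=lambda s: s[1])]
```

For loads `[5, 4, 3, 2, 1]` on two slots this yields per-slot totals of 8 and 7, close to the ideal split of 7.5, which is the kind of balance the paper's CPU-utilization metric rewards.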
Information Security
System Safety Analysis Tool for SysML and Case Study
TANG Hong-ying, HU Jun, CHEN Shuo, SHI Meng-ye
Computer Science. 2020, 47 (5): 284-294.  doi:10.11896/jsjkx.190600169
Abstract PDF(4679KB) ( 80 )   
References | Related Articles | Metrics
Model-based safety analysis methods can improve the modeling and analysis capabilities of today's complex safety-critical systems. SysML is an informal system functional modeling language widely used in industry, while AltaRica is a formal modeling language for system safety analysis. Addressing the current lack of SysML-oriented system safety analysis tools in China, this paper designs and implements a system safety analysis tool for SysML and conducts a case study. Firstly, mapping rules from the SysML design model to the AltaRica analysis model are established, together with an algorithm that realizes automatic conversion between the two models. An AltaRica analysis engine is also integrated to analyze the safety of the system model. Finally, the complex wheel brake system in the SAE-AIR6110 standard is used as an example to verify the feasibility and effectiveness of the tool. The experimental results show that, for this complex system with 25 component types and 34 component instances, the tool can effectively convert the SysML model into the AltaRica model and perform a correct safety analysis.
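The flavour of such model-to-model mapping rules can be conveyed with a toy transformer. The rule below (a SysML block becomes an AltaRica `node ... edon` with its ports as flow variables) is a hypothetical simplification for illustration, not one of the paper's actual mapping rules.

```python
def sysml_block_to_altarica(block):
    """Toy mapping rule: render one SysML block (given as a dict with
    'name' and 'ports') as a skeletal AltaRica node declaration."""
    lines = [f"node {block['name']}"]
    for port in block.get("ports", []):
        lines.append(f"  flow {port} : bool;")
    lines.append("edon")
    return "\n".join(lines)
```

A real converter would additionally translate state machines into AltaRica states and events, which is where the safety semantics live.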
Vulnerability Detection Using Bidirectional Long Short-term Memory Networks
GONG Kou-lin, ZHOU Yu, DING Li, WANG Yong-chao
Computer Science. 2020, 47 (5): 295-300.  doi:10.11896/jsjkx.190800046
Abstract PDF(1809KB) ( 55 )   
References | Related Articles | Metrics
With the continuous development of computer technology, the number of software systems and the demand for them keep increasing, and development difficulty keeps escalating. Code reuse and the complexity of the code itself inevitably introduce vulnerabilities into software. These vulnerabilities, hidden in massive amounts of code, are hard to find, but once exploited they can lead to irreparable economic losses. In order to discover software vulnerabilities in time, this paper first extracts the method bodies from the source code to form a method set and constructs an abstract syntax tree for each method. The statements in each method are extracted by means of the abstract syntax tree to form a statement set, custom variable names, method names and strings are replaced with uniform identifiers, and a separate node number is assigned to each statement to form a node set. Secondly, data-flow and control-flow analysis are used to extract the data dependencies and control dependencies between nodes. The node set extracted from a method body, together with the inter-node data and control dependencies, is then combined into a feature representation of the method and further processed into a feature matrix by one-hot encoding. Finally, each matrix is labeled with a vulnerability tag to generate a training sample, and a neural network is used to train the corresponding vulnerability classification model. To learn the context information of the sequences better, a BiLSTM network is selected and an Attention layer is added to further improve the performance of the model. In the experiments, the accuracy and recall of the vulnerability detection results reach 95.3% and 93.5% respectively, which confirms that the proposed method can detect security vulnerabilities in code more accurately.
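Two of the preprocessing steps above, replacing custom identifiers with uniform tokens and one-hot encoding the result, can be sketched as follows. This is an illustrative simplification with assumed token names (`VAR1`, `VAR2`, ...), not the paper's exact normalisation scheme or vocabulary.

```python
import re

def normalise(statement, names):
    """Replace each user-defined name with a uniform identifier so that
    semantically similar statements map to the same token sequence."""
    for i, name in enumerate(names, start=1):
        statement = re.sub(rf"\b{re.escape(name)}\b", f"VAR{i}", statement)
    return statement

def one_hot_matrix(tokens, vocab):
    """Encode a token sequence as a one-hot feature matrix; tokens
    outside the vocabulary become all-zero rows."""
    index = {tok: i for i, tok in enumerate(vocab)}
    matrix = []
    for tok in tokens:
        row = [0] * len(vocab)
        if tok in index:
            row[index[tok]] = 1
        matrix.append(row)
    return matrix
```

Matrices produced this way are what a BiLSTM (optionally with attention) would consume row by row in each direction.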
RSU-based Assisting Ring Formation Scheme in VANET
ZHANG Hao, CAI Ying, XIA Hong-ke
Computer Science. 2020, 47 (5): 301-305.  doi:10.11896/jsjkx.190400119
Abstract PDF(2161KB) ( 37 )   
References | Related Articles | Metrics
A vehicular ad-hoc network (VANET) makes the transportation system more intelligent and efficient, but due to the open wireless channel and the high-speed movement of vehicles, VANET suffers from privacy leakage of identity, transmitted data and location. For the problem of identity privacy leakage in VANET, existing research increasingly uses ring signatures. However, how a vehicle forms a ring with the surrounding vehicles while moving has always been a difficult problem. Therefore, for areas where infrastructure is well deployed, an RSU-based assisting ring formation scheme is proposed. The RSU collects the public keys of the vehicles in its coverage area, thus determining the public key set, and broadcasts it to the vehicles in the area. Bilinear pairing is used to achieve identity-based encryption of the messages transmitted between the RSU and the vehicles. Security analysis and experiments show that the scheme achieves better efficiency and security in areas with good infrastructure.
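The RSU's role in the scheme, collect keys, fix the set, broadcast it, can be summarised in a toy helper. This is a hypothetical sketch of the bookkeeping only; the cryptographic parts (ring signatures, bilinear-pairing-based IBE) are deliberately omitted.

```python
def rsu_form_ring(collected_keys, min_ring_size):
    """Deduplicate the public keys collected in the RSU's coverage area
    and release the set as the ring only once it is large enough to
    give each member meaningful anonymity; otherwise keep waiting."""
    ring = sorted(set(collected_keys))
    return ring if len(ring) >= min_ring_size else None
```

The anonymity of a ring signature grows with ring size, which is why a minimum size threshold is a natural design choice here.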
Network Security Configuration Generation Framework Based on Genetic Algorithm Optimization
BAI Wei, PAN Zhi-song, XIA Shi-ming, CHENG Ang-xuan
Computer Science. 2020, 47 (5): 306-312.  doi:10.11896/jsjkx.190500038
Abstract PDF(2083KB) ( 37 )   
References | Related Articles | Metrics
It is an important task in network security management to configure network security equipment reasonably and enforce access control on information systems. As network size increases, complex inter-dependent relationships arise among user privileges. Traditionally, access control lists are generated manually from the business requirements under the principle of least privilege, and these inter-dependent relationships are neglected; network users may thus be granted more privileges than they deserve, which can introduce vulnerabilities into network security. In this paper, a security configuration generation framework based on genetic algorithm optimization is proposed. Firstly, the framework extracts user privilege information and network semantic information from the network planning and configuration information, and a network security risk assessment model is used to assess the network risk under different security configurations. Then all possible access control configurations are encoded as genes, and an initial population is generated based on pre-determined genetic operators and hyper-parameters. Finally, a better individual is generated by the genetic algorithm. The framework can not only compare the network security risks under different security configurations but also search the possible configuration space for an optimal security configuration, thus realizing automatic generation of network security device access control strategies. The framework is validated in a simulated network environment with 20 devices and 30 services, where it finds a better security configuration within 10 generations of iteration with a population of 150 samples. The experimental data show that the framework can automatically generate a reasonable network security configuration according to network security requirements.
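The encode-as-genes / evolve / select loop described above can be sketched generically. In this minimal sketch a genome is a bit-vector of allow/deny access-control entries and `fitness` stands in for the negated risk-assessment score; all operator choices and parameter values here are assumptions, not the paper's.

```python
import random

def evolve(fitness, genome_len, pop_size=150, generations=10,
           crossover_p=0.8, mutation_p=0.02, seed=0):
    """Elitist genetic algorithm: keep the top half each generation,
    refill the population with one-point crossover plus bit-flip
    mutation, and return the best genome found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: survivors kept intact
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_p:
                cut = rng.randrange(1, genome_len)
                child = a[:cut] + b[cut:]       # one-point crossover
            else:
                child = a[:]
            children.append([1 - g if rng.random() < mutation_p else g
                             for g in child])   # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)
```

With `fitness=sum` (the one-max toy problem) the loop quickly drives genomes toward all-ones, mirroring how the framework drives configurations toward lower assessed risk.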
Role Dynamic Adjustment Algorithm for Resisting Insider Threat
PAN Heng, LI Jing-feng, MA Jun-hu
Computer Science. 2020, 47 (5): 313-318.  doi:10.11896/jsjkx.190800051
Abstract PDF(1438KB) ( 120 )   
References | Related Articles | Metrics
Because current role definitions deviate from the changing business processes and information infrastructure, organizations are vulnerable to insider threats. A role dynamic adjustment algorithm is proposed, based on the defensive idea of changing the set of roles within an organization regularly and reasonably. The algorithm defines an objective function with a tuning parameter to balance two elements: the users' actual privilege-usage data and the system administrator's expert knowledge. Based on a heuristic search strategy and a subset-pairing technique, a group of candidate roles is obtained; from these, a set of adjusting roles that achieves a predefined score is selected using a heuristic function. Finally, to reduce role redundancy, the users of the organization are reassigned roles from the adjusting roles, yielding a new Role-Based Access Control (RBAC) configuration. The performance of the algorithm is analyzed using the audit logs of a hospital management system. The experimental results show that the proposed algorithm can efficiently adjust the RBAC configuration for a specific organization, providing a concrete basis for resisting insider threats.
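The balancing objective described above can be sketched as a convex combination of the two signals. This is a hedged illustration of the idea only; the paper's actual objective function, data model and parameter name are not reproduced here.

```python
def role_score(role_privileges, usage_count, expert_weight, alpha=0.5):
    """Score a candidate role (a set of privileges) as
    alpha * (observed usage of its privileges)
    + (1 - alpha) * (administrator's expert weighting of them).
    alpha tunes how much audit-log evidence outweighs expert opinion."""
    use = sum(usage_count.get(p, 0) for p in role_privileges)
    expert = sum(expert_weight.get(p, 0.0) for p in role_privileges)
    return alpha * use + (1 - alpha) * expert
```

Candidate roles whose score clears a predefined threshold would then form the adjusting-role set from which users are reassigned.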