Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 47 Issue 1, 15 January 2020
Computer Architecture
High Performance Computing and Astronomical Data:A Survey
WANG Yang, LI Peng, JI Yi-mu, FAN Wei-bei, ZHANG Yu-jie, WANG Ru-chuan, CHEN Guo-liang
Computer Science. 2020, 47 (1): 1-6.  doi:10.11896/jsjkx.190900042
Data is an important driver of astronomical development. Distributed storage and high performance computing (HPC) have a positive effect on handling the complexity, irregular storage and computation of massive astronomical data. The integration of multi-source information and multiple disciplines in astronomical research has become inevitable, and astronomical big data has entered the era of large-scale computing. HPC provides a new means for astronomical big data processing and analysis, and offers new solutions to problems that traditional methods cannot solve. Based on the classification and characteristics of astronomical data, and with HPC as the supporting technology, this paper studied the fusion, efficient access, analysis, subsequent processing and visualization of astronomical big data, and summarized the current state of the field. Furthermore, this paper summarized the technical characteristics of the current stage, put forward research strategies and technical methods for dealing with astronomical big data, and discussed the open problems and development trends in the processing of astronomical big data.
Research on Locality-aware Design Mechanism of State-of-the-art Parallel Programming Languages
YUAN Liang,ZHANG Yun-quan,BAI Xue-rui,ZHANG Guang-ting
Computer Science. 2020, 47 (1): 7-16.  doi:10.11896/jsjkx.181202409
The memory access locality of a parallel program is an increasingly important factor in extracting performance from the increasingly complex memory hierarchies of current multi-core processors. In this paper, two different kinds of locality concepts, horizontal locality and vertical locality, were proposed and defined. The state of the art of parallel programming languages was investigated and analyzed, and the methods and mechanisms by which these languages describe and control memory access locality were analyzed in detail from the viewpoints of horizontal and vertical locality. Finally, some future research directions for parallel programming languages were summarized, especially the importance of integrating and supporting both horizontal and vertical locality in future parallel programming language research.
Large-scale High-performance Lattice Boltzmann Multi-phase Flow Simulations Based on Python
XU Chuan-fu,WANG Xi,LIU Shu,CHEN Shi-zhao,LIN Yu
Computer Science. 2020, 47 (1): 17-23.  doi:10.11896/jsjkx.190500009
Due to its wealth of third-party libraries and high development productivity, Python is becoming increasingly popular as a programming language in areas such as data science and artificial intelligence. Python also provides fundamental support for scientific and engineering computing; for example, libraries such as NumPy and SciPy provide efficient data structures for multi-dimensional arrays and rich numerical functions. Traditionally, Python was used as a script language, gluing together preprocessors, solvers and postprocessors and enhancing automation in numerical simulations. Recently, some researchers have implemented their solvers in Python and parallelized their Python codes on high performance computers, with impressive results. Because of its intrinsic features, implementing and optimizing high performance large-scale numerical simulations in Python is quite different from doing so in traditional languages such as C/C++ and Fortran. This paper presented PyLBMFlow, a large-scale parallel open source 3D lattice Boltzmann multi-phase flow simulation code written in Python, and investigated large-scale parallel computing and performance optimization for Python numerical applications. LBM flow data structures and computational kernels were designed with NumPy multi-dimensional arrays and universal functions. Through a range of optimizations, including reconstruction of boundary processing, the efficiency of the Python computation on one CPU core is improved by about 100X over the baseline version. Furthermore, this paper designed a 3D domain decomposition method and implemented hybrid MPI+OpenMP parallelization using mpi4py and Cython. Tests on a 3D multi-phase (liquid and gas) problem with about 10 billion lattice sites, simulating drop impact under gravity with a D3Q19 lattice Boltzmann discretization and the Shan-Chen BGK single-relaxation-time collision model, achieved a weak parallel efficiency above 90% in going from 64 to 1024 compute nodes.
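As a rough illustration of the NumPy-kernel style described above (toy code, not taken from PyLBMFlow, which works on a 3D D3Q19 lattice), a minimal 2D D2Q9 streaming step can be vectorized with np.roll:

```python
import numpy as np

# Hypothetical minimal sketch: D2Q9 lattice Boltzmann streaming step.
# Each distribution f[k] is shifted one lattice site along its discrete
# velocity c[k]; np.roll gives periodic boundaries for free.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # D2Q9 velocities

def stream(f):
    """Shift each of the 9 distributions along its lattice velocity."""
    return np.stack([np.roll(np.roll(f[k], c[k, 0], axis=0),
                             c[k, 1], axis=1) for k in range(9)])

f = np.zeros((9, 4, 4))
f[1, 0, 0] = 1.0          # one particle moving in +x at site (0,0)
g = stream(f)
assert g[1, 1, 0] == 1.0  # after streaming it sits one site further in +x
```

A real collision-streaming loop would alternate this step with a BGK relaxation toward the equilibrium distribution; the optimizations the paper reports (boundary-processing reconstruction, Cython kernels) operate on kernels of this vectorized shape.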
Research on Adaptation of CFD Software Based on Many-core Architecture of 100P Domestic Supercomputing System
LI Fang,LI Zhi-hui,XU Jin-xiu,FAN Hao,CHU Xue-sen,LI Xin-liang
Computer Science. 2020, 47 (1): 24-30.  doi:10.11896/jsjkx.181102176
Domestic many-core supercomputing systems provide two programming languages with different programming difficulty, and how well CFD software adapts to the many-core architecture decides which language should be used. This paper first briefly introduced the many-core architecture, its programming model and its programming languages. Challenges for the adaptation of CFD software were then analyzed, including the data dependence of implicit methods, the solution of large sparse linear systems, multigrid methods and unstructured grids; for each challenge, a corresponding countermeasure was provided. Finally, the paper reported the speedup ratios of some typical computational fluid dynamics software based on theoretical analysis and experiments. The results show that most CFD software adapts well to the domestic many-core architecture and can use the simpler programming language to obtain good parallel efficiency on millions of cores.
High-performance Implementation Method for Even Basis of Cooley-Tukey FFT
GONG Tong-yan,ZHANG Guang-ting,JIA Hai-peng,YUAN Liang
Computer Science. 2020, 47 (1): 31-39.  doi:10.11896/jsjkx.190900179
Fast Fourier transform (FFT) is one of the most important basic algorithms, widely used in scientific computing, signal processing, image processing and other fields. With growing real-time requirements in these application fields, FFT algorithms face higher and higher performance demands. In existing FFT libraries, the speed and accuracy of the FFT are limited to a certain extent, and few researchers have put forward corresponding optimization strategies or conducted in-depth research on the implementation of the Cooley-Tukey fast Fourier transform for even bases. Based on this, this paper put forward a set of optimization strategies and methods for the even-basis Cooley-Tukey fast Fourier transform. Firstly, a butterfly network friendly to mixed SIMD computation is constructed. Secondly, according to the characteristics of the even-basis twiddle factors, the complexity of the butterfly computation is reduced as much as possible. Thirdly, the implementation is optimized through SIMD assembly optimization, assembly instruction rearrangement and selection, a register allocation strategy, and a high performance matrix transpose algorithm. Finally, a high performance FFT library is obtained. Currently, the most popular and widely used FFT libraries are FFTW and Intel MKL. Experimental results show that, on an X86 computing platform, the performance of the proposed Cooley-Tukey FFT library is better than that of MKL and FFTW. The proposed optimization methods and implementation techniques can be generalized from even bases to the implementation and optimization of other bases, helping to break through the performance bottlenecks of FFT algorithms on a given hardware platform and to build a high performance FFT library for a specific platform.
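For orientation, the classic radix-2 Cooley-Tukey recursion that such optimized libraries build on can be sketched as follows (a plain illustrative implementation; the SIMD assembly, twiddle-factor simplifications and register-allocation strategies of the paper are not shown):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Illustrative only: production libraries use iterative, cache-aware,
    vectorized kernels rather than this naive recursion."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # DFT of even-indexed samples
    odd = fft(x[1::2])             # DFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # twiddle factor W_n^k combines the two half-size transforms
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

y = fft([1, 0, 0, 0])
# DFT of a unit impulse is a constant spectrum of ones
assert all(abs(v - 1) < 1e-12 for v in y)
```

The even-basis optimizations in the paper exploit the structure of the twiddle factors `W_n^k` at even radices to cut multiplications inside the butterfly loop.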
Computer Science Theory
Uncertainty Principle as Related to Quantum Computation
Renata WONG
Computer Science. 2020, 47 (1): 40-50.  doi:10.11896/jsjkx.190400432
The high expectations regarding the computational potential of quantum computation stem from quantum mechanical features such as the principle of superposition, the phenomenon of entanglement, and destructive and constructive interference. Besides the presumed advantages of quantum computation over classical computation, there exist impediments that appear to affect the former but not the latter. Among them are the two uncertainty principles traditionally ascribed to Werner Heisenberg. The uncertainty principle originally formulated by Heisenberg pertains to the inability to measure a quantum system with non-quantum instruments without affecting it. This principle is different from the later development postulating an inherent inability of non-commuting observables to be measured precisely. At the present state of technological development, and within the current formulation and interpretation of quantum mechanics, both versions of the uncertainty principle affect the speed attainable by a quantum computer. Recently, the two uncertainty principles have received renewed attention. In his improvement of Heisenberg's principle, Ozawa took into account both types of uncertainty mentioned above. Furthermore, research into entropic uncertainty has shown that Heisenberg's uncertainty can be seen as a lower bound of Hirschman's uncertainty, indicating that quantum computation may need to consider other types of uncertainty, such as information uncertainty, as well.
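For reference, the two inequalities contrasted above can be written out explicitly. These are the standard textbook formulations (Robertson's observable-based bound and Ozawa's measurement-disturbance refinement), quoted here for context rather than taken from the paper:

```latex
% Robertson form: preparation uncertainty of non-commuting observables
\sigma(A)\,\sigma(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|

% Ozawa's inequality: measurement error \varepsilon(A) and
% disturbance \eta(B) jointly bounded with the state spreads \sigma
\varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
  \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|
```

Here $\sigma$ denotes the standard deviation in the prepared state, $\varepsilon(A)$ the error of an $A$-measurement, and $\eta(B)$ the disturbance it imparts on $B$; Ozawa's bound separates the measurement-disturbance effect Heisenberg originally described from the intrinsic preparation uncertainty captured by Robertson's form.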
Axiomatizing Covariation-Contravariation Simulation Under GSOS Operators
LI Su-ting,ZHANG Yan
Computer Science. 2020, 47 (1): 51-58.  doi:10.11896/jsjkx.181102026
The behavioral theory of processes is one of the core research areas of process calculus, focusing on behavioral equivalence and simulation relations between processes. Covariant-contravariant simulation (CC-simulation) extends (bi)simulation by distinguishing types of actions, expressing the different requirements of active, passive and communication actions in refinement relations between specifications and implementations. The (pre)congruence and axiomatization of behavioral relations are the concentrated expression of the algebraic features of process calculus, and are essential for analyzing and reasoning about specifications and implementations. In general, the proof of behavioral (pre)congruence and the construction of axiomatic systems need to be based on the structural operational semantics (SOS) of the particular process calculus. To avoid duplicated work in this kind of research, the academic community has studied meta-theories of generalized SOS rule formats, of which GSOS is one of the most widely studied. Taking action types into consideration, this paper extends the GSOS rule format to CC-simulation, proposes the CC-GSOS rule format, proves that CC-simulation is a precongruence with respect to CC-GSOS operators, and gives a general method for constructing a sound and complete axiomatic system for CC-simulation.
Contact Map-based Residue-pair Distances Restrained Protein Structure Prediction Algorithm
XIE Teng-yu,ZHOU Xiao-gen,HU Jun,ZHANG Gui-jun
Computer Science. 2020, 47 (1): 59-65.  doi:10.11896/jsjkx.181202395
De novo prediction is an important method for protein structure modeling; research on it contributes to the understanding of protein function for drug design and disease treatment. To improve prediction accuracy, a contact map-based residue-pair distance restrained protein structure prediction algorithm (CDPSP) was proposed. Within an evolutionary algorithm framework, CDPSP samples the conformational space in two stages, exploration and exploitation. In the exploration stage, mutation and selection strategies are designed on the basis of residue-pair distances, increasing the diversity of the population: a residue pair is chosen according to its contact probability in the contact map, mutation is conducted by fragment assembly in the region adjacent to the residue pair, and the selection of a mutated conformation is determined by an expected probability obtained by discretizing the residue-pair distances. In the exploitation stage, a contact-based score and an energy function are used to evaluate conformation quality in the search for good conformations, enhancing the sampling ability of CDPSP in near-native regions. To verify its performance, CDPSP was tested on 10 targets of the FM group of CASP12 and compared with state-of-the-art algorithms. The test results show that CDPSP can predict more accurate protein tertiary structure models.
Chaotic Activity Filter Method for Business Process Based on Log Automaton
LI Juan,FANG Xian-wen,WANG Li-li,LIU Xiang-wei
Computer Science. 2020, 47 (1): 66-71.  doi:10.11896/jsjkx.181102110
Business process event logs sometimes contain chaotic activities, i.e., activities that are independent of the process state, free from process constraints, and may happen anytime and anywhere. Chaotic activities can seriously degrade the quality of business process mining, so filtering them is one of the key tasks of business process management. Current filtering methods mainly remove infrequent behavior from the event log, and such frequency-based filtering is not effective against chaotic activities. To solve this problem, a method based on log automata and entropy is proposed to filter chaotic activities from logs. Firstly, a set of suspicious chaotic activities with high entropy is obtained by calculating the direct preset rate and direct postset rate of each activity. Then a log automaton is constructed from the event log, and the chaotic activity set is obtained as the intersection of the activities on the infrequent arcs of the automaton and the high-entropy activities of the log. Finally, conditional occurrence probabilities and behavioral profiles are used to determine the dependence between each chaotic activity and the other activities, so as to decide whether to delete the chaotic activity from the log completely, or to keep it in its correct position in the log and delete its other occurrences. The effectiveness of the method is verified by case analysis.
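The entropy signal behind this kind of filtering can be illustrated with a toy sketch (hypothetical code, not from the paper): an activity whose direct successors are spread evenly over many other activities has high successor entropy, while a well-constrained activity has low entropy.

```python
import math
from collections import Counter

def successor_entropy(traces, act):
    """Entropy (in bits) of the direct-successor distribution of act.
    traces: list of traces, each a list of activity names."""
    succ = Counter()
    for t in traces:
        for i in range(len(t) - 1):
            if t[i] == act:
                succ[t[i + 1]] += 1
    n = sum(succ.values())
    return -sum((c / n) * math.log2(c / n) for c in succ.values()) if n else 0.0

traces = [["a", "x", "b"], ["a", "y", "b"], ["a", "z", "b"], ["a", "w", "b"]]
# "a" is followed by a different activity every time -> maximal entropy
assert successor_entropy(traces, "a") == 2.0
# each filler activity is always followed by "b" -> zero entropy
assert successor_entropy(traces, "x") == 0.0
```

The paper's method refines this idea with both preset and postset statistics and then cross-checks the high-entropy candidates against the infrequent arcs of a log automaton before deleting anything.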
Quick Algorithm to Find an Approximate Maximum Clique in Undirected Graph by Using Local-limited Search Strategy
ZHONG Mao-sheng,JIANG Chao,TAO Lan,HE Xiong,LUO Yuan-sheng
Computer Science. 2020, 47 (1): 72-78.  doi:10.11896/jsjkx.181102109
Finding the maximum clique is a well-known NP-complete problem, and the classical algorithms for finding the maximum clique basically use exact search strategies. Because of the complexity of the problem, these algorithms may only be applicable to special small-scale graphs and are difficult to apply to complex graphs with large numbers of vertices and edges. Aiming at the problem that these classical algorithms perform much redundant and ineffective search after a maximum clique has been found, this paper proposed a new algorithm based on partition-and-recursion and a local-limited search strategy to find a maximum clique in an undirected graph: the graph is partitioned into an adjacent sub-graph and a suspended sub-graph, and the maximum clique of the adjacent sub-graph and of parts of the suspended sub-graph is found recursively under a search range control function. Experiments on the benchmark data set DIMACS show that this algorithm finds the maximum clique on most of the experimental data, finds an approximate maximum clique very close to the maximum on the rest, and runs much faster than the classical algorithms. Therefore, in the many cases without precise requirements on the maximum clique, this algorithm has good application value.
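As a contrast to the partition-and-recursion approach above (which is not reproduced here), even a simple greedy heuristic shows what approximate clique search looks like: repeatedly add the vertex with the most neighbors among the remaining candidates.

```python
def greedy_clique(adj):
    """Greedy approximate maximum clique (illustrative toy, not the
    paper's algorithm). adj: dict mapping vertex -> set of neighbours."""
    clique = []
    candidates = set(adj)
    while candidates:
        # pick the candidate with the most neighbours still compatible
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.append(v)
        candidates &= adj[v]   # keep only vertices adjacent to everything so far
    return clique

# Triangle {0,1,2} plus a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert sorted(greedy_clique(adj)) == [0, 1, 2]
```

Greedy search like this can get stuck in local optima; the paper's local-limited search strategy bounds how much of the suspended sub-graph is explored recursively, trading exactness for speed in a more controlled way.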
Database & Big Data & Data Science
K Nearest Neighbors Queries of Moving Objects in Time-dependent Road Networks
ZHANG Tong,QIN Xiao-lin
Computer Science. 2020, 47 (1): 79-86.  doi:10.11896/jsjkx.181102231
With the wide application of location-based services, object queries on time-dependent road networks have gradually become a research hotspot. In the past, most research focused only on static objects on time-dependent road networks (such as gas stations, restaurants, etc.) and did not take moving objects (such as taxis) into account, although queries over moving objects have a very wide range of applications in daily life. Therefore, a K nearest neighbor query algorithm for moving objects, TD-MOKNN, is proposed for time-dependent road networks. The algorithm is divided into a pre-processing stage and a query stage. In the pre-processing stage, a road network index and a grid index are established, and a new method of mapping moving objects onto the road network is proposed, which removes the limitation in previous research that moving objects must lie exactly on road network intersections. In the query stage, a new efficient heuristic value is calculated using an inverted grid index, and an efficient K nearest neighbor query algorithm is designed using the pre-processed information and the heuristic. Experiments verify the effectiveness of the algorithm: compared with the existing algorithm, TD-MOKNN reduces the number of traversed vertices and the response time by 55.91% and 54.57% respectively, and improves query efficiency by 55.2% on average.
Stepwise Optimized Feature Selection Algorithm Based on Discernibility Matrix and mRMR
FAN Xin,CHEN Hong-mei
Computer Science. 2020, 47 (1): 87-95.  doi:10.11896/jsjkx.181202320
Classification is a common problem in modern industrial production. Classification efficiency and accuracy can be improved effectively by using feature selection to filter useful information before the classification task. By considering the maximal correlation between features and classes and the minimal redundancy among features, the minimal-redundancy-maximal-relevance (mRMR) algorithm can select feature subsets effectively. However, two problems exist in mRMR: the importance of features is estimated with relatively large deviation in its middle and later stages, and a feature subset is not given directly. A novel algorithm based on the principle of mRMR and the discernibility matrix of neighborhood rough sets was proposed to solve these problems. The significance of a feature is defined by neighborhood entropy and neighborhood mutual information following the minimal-redundancy-maximal-relevance principle, which handles mixed-type data better. A dynamic discernibility set is defined on the basis of the discernibility matrix, and its dynamic evolution is used as the policy for deleting redundant features and narrowing the search range. The optimized feature subset is output when iteration stops according to the stopping condition given by the discernibility matrix. In this paper, SVM, J48, KNN and MLP were selected as classifiers to evaluate the performance of the feature selection algorithm. Experimental results on public datasets show that the average classification accuracy of the proposed algorithm is about 2% higher than that of previous algorithms, and that the algorithm effectively shortens feature selection time on data sets with many features. The proposed algorithm thus inherits the advantages of the discernibility matrix and mRMR, and can deal with feature selection problems effectively.
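The classical mRMR scoring that this work starts from can be sketched as follows (an illustrative toy with discrete mutual information; the paper's neighborhood-entropy significance measure and discernibility-matrix stopping condition are not reproduced here):

```python
import numpy as np

def mi(x, y):
    """Mutual information (nats) between two discrete sequences."""
    total = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                total += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return total

def mrmr(X, y, k):
    """Greedy mRMR: pick features maximizing relevance mi(f, y) minus
    mean redundancy with already-selected features."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            rel = mi(X[:, j], y)
            red = np.mean([mi(X[:, j], X[:, s]) for s in selected]) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 0 copies the label; feature 1 is constant noise
X = np.array([[0, 1], [1, 1], [0, 1], [1, 1]])
y = np.array([0, 1, 0, 1])
assert mrmr(X, y, 1) == [0]
```

The deviation problem the paper addresses arises because, as `selected` grows, the averaged redundancy term increasingly dominates and distorts the relevance estimate of the remaining features.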
Method of Weibo User Influence Calculation Integrating Users’ Own Factors and Interaction Behavior
WANG Xin-sheng,MA Shu-zhang
Computer Science. 2020, 47 (1): 96-101.  doi:10.11896/jsjkx.181202253
High-influence Weibo users play an important role in commodity marketing and the guidance of public opinion, so mining high-influence users has become a hot research issue in Weibo social networks. To address the incomplete analysis of interaction behavior and of users' own factors in existing calculations of Weibo user influence, a computing model of Weibo user influence integrating users' own factors and interaction behavior was proposed. The method considers both the direct and the indirect influence of Weibo users. In the direct-influence calculation phase, the initial influence of a user is calculated by analyzing the user's own factors, such as the number of fans, user activity and recent microblog quality. User interaction behavior, such as the microblog visibility rate and the user interaction coefficient, is then analyzed to calculate the user's communication ability. Finally, combining the initial influence with the communication ability, the user's direct influence is calculated with an improved PageRank algorithm. In the indirect-influence calculation phase, by analyzing the connection structure of the user network graph, the connection paths between non-adjacent users are divided into three categories (simple path, repeated path and complex path), and the user's indirect influence is calculated accordingly. The experimental results show that the proposed algorithm is 14.8% and 8.3% higher than the PageRank algorithm and the MR-UIRank algorithm respectively in terms of user ranking accuracy.
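The base PageRank iteration that the direct-influence phase builds on can be sketched as follows (a plain power-iteration toy; the paper's improved variant additionally seeds each user with an initial influence and weights edges by interaction strength, which is not shown here):

```python
import numpy as np

def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank. links[i] = list of nodes i points to."""
    n = len(links)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1 - d) / n)          # teleport term
        for i, outs in enumerate(links):
            # dangling nodes spread their rank over all nodes
            share = r[i] / len(outs) if outs else r[i] / n
            for j in (outs if outs else range(n)):
                new[j] += d * share
        r = new
    return r

# Users 0 and 1 both follow user 2; user 2 follows user 0
r = pagerank([[2], [2], [0]])
assert r[2] == max(r)   # the most-followed user ranks highest
```

In the influence setting, "links" would be interaction edges (reposts, comments), and the uniform teleport vector would be replaced by each user's computed initial influence.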
Multi-class Imbalanced Learning Algorithm Based on Hellinger Distance and SMOTE Algorithm
DONG Ming-gang,JIANG Zhen-long,JING Chao
Computer Science. 2020, 47 (1): 102-109.  doi:10.11896/jsjkx.190600060
Imbalanced data is common in real life, and traditional machine learning algorithms have difficulty achieving satisfactory results on it. The synthetic minority oversampling technique (SMOTE) is an efficient method for handling this problem. However, in multi-class imbalanced data, the disordered distribution of boundary samples and the discontinuity of class distributions become more complicated, and synthetic samples may invade the areas of other classes, leading to over-generalization. To solve this issue, noting that decision trees based on Hellinger distance have been proved insensitive to imbalanced data, this paper combined Hellinger distance with SMOTE and proposed an oversampling method called SMOTE with Hellinger distance (HDSMOTE). Firstly, a sampling direction selection strategy based on the Hellinger distances of local neighborhood areas was presented to guide the direction of sample synthesis. Secondly, a sampling quality evaluation strategy based on Hellinger distance was designed to prevent synthesized samples from invading other classes, reducing the risk of over-generalization. Finally, to demonstrate the performance of HDSMOTE, 15 multi-class imbalanced data sets were preprocessed with 7 representative oversampling algorithms and the HDSMOTE algorithm, and classified with a C4.5 decision tree, using Precision, Recall, F-measure, G-mean and MAUC as evaluation standards. Compared with the competing oversampling methods, HDSMOTE improves these measures by 17.07% in Precision, 21.74% in Recall, 19.63% in F-measure, 16.37% in G-mean and 8.51% in MAUC, showing better classification performance on multi-class imbalanced data than the seven representative oversampling methods.
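The two ingredients HDSMOTE combines, basic SMOTE interpolation and the Hellinger distance, can be sketched as follows (illustrative code only; the direction-selection and quality-evaluation strategies the paper builds on top of them are not reproduced):

```python
import numpy as np

def smote_sample(x, neighbor, rng):
    """Classic SMOTE step: synthesize a point on the segment between a
    minority sample x and one of its minority-class neighbors."""
    return x + rng.random() * (neighbor - x)

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q
    (0 for identical distributions, 1 for disjoint support)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
nb = np.array([1.0, 1.0])
s = smote_sample(x, nb, rng)
assert 0.0 <= s[0] <= 1.0 and s[0] == s[1]   # lies on the segment

assert hellinger([1.0, 0.0], [1.0, 0.0]) == 0.0
assert abs(hellinger([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-12
```

In HDSMOTE, distances of this form computed over local class distributions steer which neighbor to interpolate toward and veto synthetic points that would land too close to another class.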
Stacked Support Vector Machine Based on Attacks on Labels of Data Samples
JIN Yao,XU Li-ya,LV Hui-lin,GU Su-hang
Computer Science. 2020, 47 (1): 110-116.  doi:10.11896/jsjkx.181001921
Adversarial data samples, which indeed exist in real-world datasets, can mislead classifiers into incorrect predictions and thus poor classification; however, reasonable utilization of adversarial data samples can distinctly improve the generalization of a classifier. Since most existing classifiers do not take information about adversarial data samples into account when building their classification models, a stacked support vector machine called S-SVM, based on attacks on the labels of data samples, was proposed to obtain better classification performance by learning from adversarial data samples. In a given dataset, a certain percentage of data samples are randomly chosen as adversarial data samples; that is, the labels of these chosen samples are replaced with other labels from the dataset, different from their original ones. An adversarial support vector machine (A-SVM) is then generated by training a support vector machine (SVM) on the dataset containing the adversarial samples. The first-order gradient of the output error of the generated A-SVM with respect to the input samples is computed, and the input samples are updated by embedding this gradient information into their original feature space. The updated samples are then input into the next A-SVM and trained again, gradually improving classification performance. As a result, S-SVM is formulated by stacking A-SVMs layer by layer, and the best classification results are obtained by the corresponding S-SVM. Theoretical analysis and experimental results on UCI and KEEL real-world datasets show that the first-order gradient information computed by learning from adversarial data samples not only provides a positive relation between the outputs and the inputs of a classifier, but also provides a novel way to stack the front and rear sub-classifiers in the proposed S-SVM.
Computer Graphics & Multimedia
Overview of Content-based Video Retrieval
HU Zhi-jun,XU Yong
Computer Science. 2020, 47 (1): 117-123.  doi:10.11896/jsjkx.190100231
Video is a medium carrying rich information. With the rise of short-video apps such as Douyin (TikTok), the number of videos on the network and in databases has increased dramatically, and manual labeling is no longer suitable for video retrieval. Content-based video retrieval extracts the spatial characteristics of video frames or the temporal characteristics between frames, enabling users to search and categorize videos more objectively and efficiently. This paper summarized content-based video retrieval algorithms, some classical video retrieval algorithms, and the research and application of deep learning in content-based video retrieval. Finally, the development prospects of deep learning in video retrieval were analyzed.
Review of Road Extraction for High-resolution SAR Images
ZHOU Yue-yong,CHENG Jiang-hua,LIU Tong,WANG Yang,CHEN Ming-hui
Computer Science. 2020, 47 (1): 124-135.  doi:10.11896/jsjkx.190100033
In the field of remote sensing, road extraction from Synthetic Aperture Radar (SAR) images has high research significance and application value. With the continuous development of SAR imaging technology and the gradual improvement of SAR image resolution, research on this subject has attracted much attention. However, current research on road extraction from high-resolution SAR images is not systematic enough, and many road extraction methods designed for low-resolution SAR images are not suitable for high resolution. Therefore, this paper summarized the general workflow of road extraction from high-resolution SAR images and enumerated some specific methods, analyzing their advantages, disadvantages and application scopes in a targeted manner, pointing out the main open problems of this research topic, and forecasting its development trends.
Few-shot Food Recognition Combining Triplet Convolutional Neural Network with Relation Network
LV Yong-qiang,MIN Wei-qing,DUAN Hua,JIANG Shu-qiang
Computer Science. 2020, 47 (1): 136-143.  doi:10.11896/jsjkx.181202316
Food recognition attracts wide attention in the fields of food health and smart home. Most existing work focuses on food recognition with large-scale labeled samples and thus fails to robustly recognize food categories with few samples; under this condition, few-shot food recognition is an urgent problem. Most metric-learning-based few-shot recognition methods emphasize the similarity values of image pairs without paying substantial attention to inter-class and intra-class variations, and most works use a triplet convolutional neural network with a linear metric function to learn inter-class and intra-class information; however, a linear metric function is not discriminative enough for measuring the similarity of food images. To address these problems, this paper used a learnable relation network as a non-linear metric and proposed a triplet network combined with a relation network to overcome the above two disadvantages of few-shot learning and triplet networks. The model adopts a triplet network as the feature embedding network for image feature learning and uses a more discriminative relation network as the non-linear metric to learn inter-class and intra-class information; the model is trained end-to-end. In addition, this paper proposed an online mining rule for triplet samples, which stabilizes training. Comprehensive experiments were conducted on three food datasets: Food-101, VIREO Food-172 and ChineseFoodNet. Compared with popular few-shot learning methods such as the relation network and the matching network, the proposed model achieves an average improvement of about 3.0%, and compared with a triplet network with a linear metric, it achieves an average improvement of about 1.0%. This paper also explored the influence of the margin in the loss function, the parameter settings of online triplet sampling, and initialization methods on performance.
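The triplet margin loss that drives such an embedding network can be sketched as follows (a NumPy toy with squared Euclidean distances; the model above replaces this fixed linear metric with a learned relation network, which is not shown here):

```python
import numpy as np

def triplet_loss(a, p, n, margin=1.0):
    """Hinge-style triplet loss on embeddings: pull anchor a toward the
    positive p and push it away from the negative n by at least margin."""
    d_ap = np.sum((a - p) ** 2)   # anchor-positive distance
    d_an = np.sum((a - n) ** 2)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, embedded close
n = np.array([2.0, 0.0])   # different class, embedded far
assert triplet_loss(a, p, n) == 0.0   # margin already satisfied
assert triplet_loss(a, n, p) > 0.0    # violated when the roles swap
```

Online triplet mining of the kind the paper proposes selects, within each batch, triplets whose loss is non-zero (hard or semi-hard), since triplets that already satisfy the margin contribute no gradient.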
Automatic Detection Algorithm of Pharyngeal Fricative in Cleft Palate Speech Based on Multi-delay Fourth-order Cumulant Octave Spectral Line
HE Fei,MENG Yu-xuan,TIAN Wei-wei,WANG Xi-yue,HE Ling,YIN Heng
Computer Science. 2020, 47 (1): 144-152.  doi:10.11896/jsjkx.180701349
In order to realize the automatic classification and detection of pharyngeal fricatives and normal speech in cleft palate speech, an automatic pharyngeal fricative detection algorithm based on a multi-delay fourth-order cumulant one-third octave spectral line (FTSL) was proposed by studying the pronunciation characteristics of cleft palate patients with pharyngeal fricatives. Currently, most research on the detection of pharyngeal fricatives is based on consonant duration and the frequency-domain energy distribution of speech, and little research has achieved automatic classification of pharyngeal fricatives and normal speech. Based on the pronunciation characteristics of pharyngeal fricatives, the multi-delay fourth-order cumulant of each frame is computed, and the one-third octave is then used to extract the FTSL, with which automatic classification of pharyngeal fricatives and normal speech is realized. In this experiment, the FTSLs of 200 normal consonants and 194 pharyngeal fricative consonants were extracted and classified with an SVM classifier. Comparative experiments were also conducted on the FTSL feature and traditional acoustic features, and the results were fully analyzed and discussed in this paper. The experimental results show that the proposed FTSL achieves an accuracy of 92.7% for the automatic classification of pharyngeal fricatives; it performs excellently and can provide an effective, objective and non-invasive auxiliary basis for clinical pharyngeal state assessment.
Image Fusion Method Based on Image Energy Adjustment
LI Xiao-yu,GAO Qing-wei,LU Yi-xiang,SUN Dong
Computer Science. 2020, 47 (1): 153-158.  doi:10.11896/jsjkx.181202437
Abstract PDF(4248KB) ( 222 )   
References | Related Articles | Metrics
Aiming at the problem that traditional image fusion algorithms cannot achieve a good fusion effect on images with large energy differences, this paper decomposed the high- and low-frequency signals of the two images by combining multi-scale transformation with sparse representation according to the energy division of the image. The sparse fusion rules for image blocks of different energies are adjusted, and a consistency test is added in the high-frequency part to further constrain the composition of the MSD coefficients corresponding to the local spatial energy. Finally, the fused image is reconstructed by the inverse wavelet transform. Infrared images, medical images and multi-focus images are used to verify the method's performance; the effects of the number of sparse decomposition layers and the window step size on the fusion effect are analyzed, the optimal decomposition scheme under the framework is obtained, and fusion images with excellent subjective results and objective indicators are then obtained. The experimental results show that the proposed algorithm can achieve a better fusion effect for images obtained by any two types of sensors and is not limited to the fusion of two images. In terms of indicators such as SF, SSIM and EFQI, it is superior to traditional fusion algorithms and to general multi-scale algorithms combined with sparse representation.
Environment-assisted Multi-task Learning for Polyphonic Acoustic Event Detection
GAO Li-jian,MAO Qi-rong
Computer Science. 2020, 47 (1): 159-164.  doi:10.11896/jsjkx.190200365
Abstract PDF(1842KB) ( 100 )   
References | Related Articles | Metrics
Polyphonic Acoustic Event Detection (AED) is a challenging task, as the sounds are mixed with signals from different events and the overall features extracted from the mixture cannot represent each event well, leading to suboptimal AED performance, especially when the number of sound events increases or the environment changes. Existing methods do not consider the impact of environmental changes on detection performance. Therefore, an Environment-Assisted Multi-Task learning (EAMT) method for AED was proposed. The EAMT model mainly consists of two core parts, an environment classifier and a sound event detector, where the environment classifier is used to learn environment context features. As additional information for event detection, the environment context features are fused with sound event features to assist sound event detection via multi-task learning, so as to improve the robustness of the EAMT model to environmental changes and the performance of polyphonic event detection. Based on the Freesound dataset, one of the mainstream open datasets in the AED field, and the general performance evaluation metric F1 score, three sets of comparative experiments were set up to compare the proposed method with a DNN (baseline) and CRNN, one of the most popular methods. The experimental results show that, compared with the single-task model, the EAMT model improves the performance of both environment classification and event detection, and the introduction of environment context features further improves the performance of acoustic event detection. The EAMT model is more robust than the DNN and CRNN, as its F1 score is 2% to 5% higher than the other models when the environment changes. When the number of target events increases, the EAMT model still performs prominently, achieving an improvement of about 2% to 10% in F1 score compared with the other models.
PET Image Reconstruction Based on Unbiased Linear Optimal Estimation
WANG Hong-xia,XU Ying-jie,ZHAO Yun-bo,ZHANG Wen-an
Computer Science. 2020, 47 (1): 165-169.  doi:10.11896/jsjkx.181202329
Abstract PDF(2610KB) ( 227 )   
References | Related Articles | Metrics
Positron Emission Tomography (PET) plays an important role in the qualitative diagnosis and metastasis assessment of tumors. Therefore, it is very necessary to improve the imaging quality of PET. However, most existing reconstruction algorithms rely heavily on a linear model of PET. Considering that PET is affected by many physical factors, such as detector efficiency, the geometric size of the detection system, the attenuation of gamma photons by biological tissues, and scattering effects, the linear model cannot match the nonlinear relationship between the tracer concentration and the sinogram. This paper proposed a new observation model to characterize the complicated relationship between the tracer concentration and the sinogram by introducing an unknown input term. This term consists of two parts: one is a coefficient matrix, which further describes the linear part of the projection; the other is an unknown input, which characterizes the nonlinear relationship between the tracer concentration and the sinogram. Based on the new model, PET image reconstruction is reformulated as linear unbiased optimal estimation. Then, a linear recursive relation with an unknown estimation gain is introduced, and the difficulty induced by the unknown input term is resolved by projecting the sinogram onto the null space of the coefficient matrix of the unknown input. Based on the design idea of the Kalman filter, the estimation gain is derived. Finally, Expectation-Maximization reconstruction (EM), the Kernel-based EM algorithm (KEM) and the Kalman Filtering method (KF) are compared with the proposed algorithm by calculating the Mean Square Error (MSE) and Signal-to-Noise Ratio (SNR). The experimental results show that the proposed algorithm yields a larger SNR, a smaller MSE and clearer reconstructed images, and reconstructs the size and shape of the tumor better than the others. Hence, the proposed reconstruction algorithm achieves better quality than the others.
Hyperspectral Images Denoising Based on Non-local Similarity Joint Low-rank Representation
Computer Science. 2020, 47 (1): 170-175.  doi:10.11896/jsjkx.181202337
Abstract PDF(4064KB) ( 194 )   
References | Related Articles | Metrics
The acquisition of hyperspectral images (HSI) is often disturbed by multiple types of noise, which directly affects accuracy in subsequent applications. Therefore, HSI denoising is a very important preprocessing step. The low-rank representation (LRR) model can well capture the spectral properties of HSI. However, the choice of dictionary under this framework is particularly significant and remains an open question at present. Meanwhile, the typical method cannot meet the requirements well by considering only the local correlation of the image; non-local similarity is equally significant. Based on LRR, a new HSI denoising method was proposed. Firstly, the types of noise are considered comprehensively and a dictionary with more comprehensive discrimination ability is selected. Secondly, on the premise of block processing, non-local similar information is introduced through clustering, and similar blocks are combined within the LRR framework. The experimental results on the simulated Indian Pines and real EO-1 Hyperion data sets show that the proposed method outperforms state-of-the-art HSI denoising methods both in the visual effect of the image and in the quantitative evaluation indexes on the simulated data set.
Image Denoising Algorithm Based on Adaptive Matching Pursuit
LI Gui-hui,LI Jin-jiang,FAN Hui
Computer Science. 2020, 47 (1): 176-185.  doi:10.11896/jsjkx.181202280
Abstract PDF(4198KB) ( 132 )   
References | Related Articles | Metrics
Aiming at the problems that current sparse denoising algorithms have low decomposition efficiency and unsatisfactory denoising effects, an image denoising algorithm based on adaptive matching pursuit was proposed. Firstly, the algorithm uses the adaptive matching pursuit algorithm to solve for the sparse coefficients, and then uses the K-means singular value decomposition (K-SVD) algorithm to train the dictionary into an adaptive dictionary that can effectively reflect the structural features of the image. Finally, the image is reconstructed by combining the sparse coefficients with the adaptive dictionary. During reconstruction, the coefficients corresponding to noise are removed, which achieves the denoising effect. A Spike-and-Slab prior is introduced to guide the sparsity of the sparse coefficient matrix, and two weight matrices are used to make the denoising model more realistic. In view of the importance of the dictionary in sparse algorithms, this paper compared the adaptive dictionary with the DCT redundant dictionary and the Global dictionary. The experimental results show that the denoising result of the adaptive dictionary is about 4.5 dB higher than that of the traditional dictionaries in terms of peak signal-to-noise ratio (PSNR). The proposed method improves three evaluation indicators to varying degrees compared with six current mainstream sparse denoising methods: the PSNR is increased by about 0.76 dB to 6.24 dB, the feature similarity (FSIM) by about 0.012 to 0.082, and the structural similarity (SSIM) by about 0.015 to 0.108 on average. Qualitative evaluation shows that the proposed algorithm retains more useful information and has the best visual effect. The experiments therefore fully demonstrate its effectiveness and robustness.
Artificial Intelligence
Comment Sentiment Analysis and Sentiment Words Detection Based on Attention Mechanism
LI Yuan,LI Zhi-xing,TENG Lei,WANG Hua-ming,WANG Guo-yin
Computer Science. 2020, 47 (1): 186-192.  doi:10.11896/jsjkx.181002011
Abstract PDF(1731KB) ( 254 )   
References | Related Articles | Metrics
Comment sentiment analysis is one of the research hotspots in the field of user-generated content. Because of the diversity of comment objects and the casualness of commentators' language, comment sentiment analysis has become a challenging issue. Existing methods mainly calculate the emotional polarity of comments with a pre-built emotional vocabulary. However, these methods cannot handle the problem that the same word has different emotional polarities in different contexts. To overcome this problem, the attention-based convolutional-recurrent neural network (A-CRNN) model was proposed to model the emotional polarity of comments and words in different contexts. By combining the context of words in sentences, the proposed method can focus attention on a small scale around the main emotional words. The A-CRNN model calculates the emotional polarity of words through an adaptive method, which improves the accuracy of word-level emotional polarity judgment and of short-text emotional polarity classification. Compared with CRNN, CNN and emotional dictionary methods, the proposed method achieves better results on Chinese datasets including Meituan Review and Party Building Review, and on an English dataset, Amazon Product Review.
Chinese Short Text Keyphrase Extraction Model Based on Attention
YANG Dan-hao,WU Yue-xin,FAN Chun-xiao
Computer Science. 2020, 47 (1): 193-198.  doi:10.11896/jsjkx.181202261
Abstract PDF(1568KB) ( 1018 )   
References | Related Articles | Metrics
Keyphrase extraction is a research hotspot in the field of natural language processing. In current keyphrase extraction algorithms, deep learning methods seldom take the characteristics of Chinese into account, information at the Chinese character granularity is not fully utilized, and the extraction of keyphrases from Chinese short texts still leaves considerable room for improvement. In order to improve keyphrase extraction for short texts, a model for automatic keyphrase extraction from abstracts was proposed, namely the BAST model, which combines bidirectional long short-term memory and an attention mechanism on top of a sequence tagging model. Firstly, word vectors at the word granularity and character vectors at the character granularity are used to represent the input text information. Secondly, the BAST model is trained: text features are extracted using the BiLSTM and attention mechanism, and the label of each word is classified. Finally, the character-vector model is used to correct the extraction results of the word-vector model. The experimental results show that the F1-measure of the BAST model reaches 66.93% on 8159 abstracts, which is 2.08% higher than that of the BiLSTM-CRF (Bidirectional Long Short-Term Memory and Conditional Random Field) algorithm, and it further improves on other traditional keyphrase extraction algorithms. The innovation of the model lies in combining the extraction results of the word-vector and character-vector models. The model makes full use of the characteristics of Chinese text, can effectively extract keyphrases from short texts, and further improves the extraction effect.
Comprehensive Calculation of Semantic Similarity of Ontology Concept Based on SA-BP Algorithm
XU Fei-xiang,YE Xia,LI Lin-lin,CAO Jun-bo,WANG Xin
Computer Science. 2020, 47 (1): 199-204.  doi:10.11896/jsjkx.181202351
Abstract PDF(1583KB) ( 225 )   
References | Related Articles | Metrics
There are heterogeneity problems in the indicators established by different combat forces when evaluating and testing command information systems, which leads to great difficulties in information interaction and data sharing. In order to achieve mapping and integration of indicator ontology concepts, building a unified global indicator ontology tree is an effective solution. In this case, the accuracy of similarity calculation for ontology concepts becomes crucial. Aiming at the low accuracy of existing ontology-concept similarity calculation models, a comprehensive similarity calculation model based on a BP neural network improved by Simulated Annealing (SA-BP) was proposed. This paper first improved the classical similarity calculation models based on semantic distance, information content and conceptual attributes. In addition, a similarity calculation model based on the coincidence of a concept's sub-nodes was proposed in order to avoid the subjectivity of artificially determined weights and the inaccuracy of the simple linear weighting used in existing models. Finally, the comprehensive similarity calculation model was trained and tested, with sample data extracted from the ontology concepts of various evaluation indicators from command information systems established by different departments of combat forces. The experimental data show that, compared with the PSO-BP model and the principal-component linear weighted model, the comprehensive similarity calculation model based on the SA-BP algorithm achieves strong correlation: the Pearson correlation coefficients between its results and expert evaluations are higher by 0.0695 and 0.1351, respectively. The experimental results verify that, after training, the SA-BP algorithm converges better and achieves higher accuracy when calculating ontology-concept similarity. Hence, key issues in the integration of ontology concepts can be effectively solved.
Intention Detection in Spoken Language Based on Context Information
XU Yang,WANG Jian-cheng,LIU Qi-yuan,LI Shou-shan
Computer Science. 2020, 47 (1): 205-211.  doi:10.11896/jsjkx.181202269
Abstract PDF(2063KB) ( 182 )   
References | Related Articles | Metrics
In recent years, with the development of artificial intelligence and the popularization of smart devices, human-computer dialogue technology has received extensive attention. Spoken language understanding is an important task in dialogue systems, and spoken language intention detection is a key technology in spoken language understanding. Due to complex language phenomena such as semantic omission, frame representation and intent conversion in multi-turn dialogue, the intent detection task for spoken language is very challenging. In order to solve the above problems, a gated-mechanism-based information sharing neural network method was proposed in this paper, which can take advantage of contextual information in multi-turn dialogue to improve detection performance. Specifically, the initial representations of the current-turn text and the context text are first constructed in combination with phonetic features to reduce the impact of speech recognition errors on the semantic representation. Secondly, a semantic encoder based on a hierarchical attention mechanism is used to obtain deep semantic representations of the current turn and the contextual text, covering multi-level semantic information from words to sentences to multiple turns of text. Finally, a gated-mechanism-based information sharing neural network is constructed to use the contextual semantic information to help intent detection in the current turn of text. The experimental results show that the proposed method can effectively use context information to improve spoken language intention detection, achieving 88.1% accuracy and an 88.0% F1 value on the dataset of CCKS2018 shared task 2, a significant improvement over existing methods.
Time-variant Neurocomputing with Finite-value Terminal Recurrent Neural Networks
SUN Ming-xuan,WENG Ding-en,ZHANG Yu
Computer Science. 2020, 47 (1): 212-218.  doi:10.11896/jsjkx.181001898
Abstract PDF(2092KB) ( 121 )   
References | Related Articles | Metrics
Conventional computing methods using recurrent neural networks ensure the asymptotic convergence of the computing error, such that the error converges to zero and the exact solution is obtained only as time approaches infinity. In this paper, a novel model of terminal recurrent neural networks was presented to address online computation problems arising from time-varying matrices. This kind of network model is characterized by finite values of the right-hand-side function and a finite settling time. Firstly, the shortcoming of asymptotically convergent network models in solving time-varying computational problems is analyzed, and the necessity of introducing terminal network models is established. Then, the dynamics of the terminal network are characterized by deriving an expression for the settling time. For solving the inverses and generalized inverses of time-varying matrices, an error function is defined and a terminal recurrent neural network is constructed based on this error function, so that the accurate solution can be achieved. For the path planning of industrial manipulators, the end effector tracks a closed trajectory by applying the terminal neural network, the joint angle returns to its initial position, and repetitive motion is achieved from an arbitrary initial position. MATLAB/Simulink is used to simulate the solution of time-varying matrix computing problems and the trajectory planning tasks of manipulators. Comparing the results obtained by the asymptotic network and the terminal network shows that the computing process using the terminal network converges in finite time and the computing accuracy is improved significantly. The presented solutions for different time-varying computing problems demonstrate the applicability of the proposed terminal networks.
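The asymptotic-versus-finite-time distinction can be illustrated with a scalar error dynamic. The sketch below is an illustrative toy, not the paper's network: the specific terminal attractor de/dt = −γ·sign(e)·|e|^ρ with 0 < ρ < 1, and the parameter values, are assumptions chosen to show the finite settling time t* = |e₀|^(1−ρ)/(γ(1−ρ)), versus exponential (only asymptotic) decay at ρ = 1.

```python
def simulate(rho, gamma=5.0, e0=1.0, dt=1e-3, steps=1000):
    """Euler-integrate the error dynamics de/dt = -gamma*sign(e)*|e|**rho
    over steps*dt seconds and return the final |e|.  rho = 1 gives the
    conventional asymptotic model (error ~ exp(-gamma*t), never exactly
    zero); 0 < rho < 1 gives a terminal attractor whose error reaches
    zero in the finite time |e0|**(1-rho) / (gamma*(1-rho))."""
    e = e0
    for _ in range(steps):
        sign = 1 if e > 0 else -1 if e < 0 else 0
        e += -gamma * sign * abs(e) ** rho * dt
    return abs(e)
```

With γ = 5 and e₀ = 1, the terminal model (ρ = 0.5) settles by t* = 0.4 s, so after 1 s its residual error is only the Euler discretization chatter, far below the e^(−5) ≈ 0.007 left by the asymptotic model.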
Improved Cuckoo Search Algorithm for Function Optimization Problems
LI Yu,SHANG Zhi-yong,LIU Jing-sen
Computer Science. 2020, 47 (1): 219-230.  doi:10.11896/jsjkx.181102165
Abstract PDF(3966KB) ( 239 )   
References | Related Articles | Metrics
In engineering optimization, most problems are continuous optimization problems, that is, function optimization problems. Aiming at the slow convergence speed, low precision and tendency to fall into local optima in the later stage of the cuckoo search algorithm, this paper proposed an improved cuckoo search algorithm based on a logarithmically decreasing nonlinear inertia weight and a randomly adjusted discovery probability. Firstly, in the update formula for the path and position of the cuckoo's nest, an update method in which the inertia weight decreases nonlinearly with the number of iterations is designed to improve nest locations and coordinate the abilities of exploration and exploitation. Secondly, a randomly adjusted discovery probability is introduced to replace the fixed discovery probability, so that both larger and smaller discovery probabilities occur randomly, which helps balance the global exploration and local exploitation of the algorithm, accelerates convergence, and increases the diversity of the population. Finally, the logarithmically decreasing parameter and the randomly adjusted discovery probability are analyzed and tested, and the optimal parameter combination of the logarithmic decrement and the optimal range of the randomly adjusted discovery probability are selected, under which the optimization effect is best. Compared with other evolutionary algorithms (BA, CS, PSO, ICS), DWCS greatly improves optimization precision, significantly reduces the number of iterations, and effectively improves convergence speed and robustness. On 16 test functions, DWCS converges to the global optimal solution, which proves that DWCS is highly competitive in solving continuous complex function optimization problems.
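The two modifications can be sketched as small helper functions. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulas: the endpoints w_max = 0.9, w_min = 0.4, the particular logarithmic form, and the uniform range [0.1, 0.7] for the discovery probability are example choices.

```python
import math
import random

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Logarithmically decreasing nonlinear inertia weight: large at the
    start (global exploration), shrinking toward w_min at the final
    iteration (fine local exploitation)."""
    return w_max - (w_max - w_min) * math.log(1 + t) / math.log(1 + t_max)

def discovery_probability(pa_min=0.1, pa_max=0.7, rng=random):
    """Randomly adjusted discovery probability: instead of the fixed
    pa = 0.25 of standard cuckoo search, draw pa uniformly each
    iteration so both large and small values occur."""
    return pa_min + (pa_max - pa_min) * rng.random()
```

In the full algorithm, `inertia_weight(t, t_max)` would scale the previous nest position in the Levy-flight update, and `discovery_probability()` would replace the fixed abandonment probability each generation.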
Using SVM Method Optimized by Improved Particle Swarm Optimization to Analyze Emotion of Chinese Text
WANG Li-zhi,MU Xiao-dong,LIU Hong-lan
Computer Science. 2020, 47 (1): 231-236.  doi:10.11896/jsjkx.181102130
Abstract PDF(1755KB) ( 162 )   
References | Related Articles | Metrics
In recent years, with the increasing number of network users, the number of user comments has also grown explosively, accompanied by a large amount of information worth referencing and mining. Text sentiment classification has emerged in response; the prediction accuracy and execution speed of a classification model are the keys to measuring its quality. The traditional approach of using an SVM for text sentiment classification is simple and easy to implement, and its model parameters determine the classification accuracy. Accordingly, this paper combined an improved particle swarm optimization (PSO) algorithm with the SVM classification method, and used the SVM optimized by improved PSO to analyze the sentiment of movie and TV drama reviews. Firstly, Douban movie review data are obtained by a web crawler. After pre-processing, the text is vectorized by weighted word2vec, which becomes recognizable input for the support vector machine. An adaptive inertia decreasing strategy and a crossover operator are used to improve the particle swarm optimization algorithm, and the loss function, penalty parameter and kernel parameter of the SVM model are optimized by the improved PSO. Finally, the text is classified by this model. Experimental results on the same data show that this method effectively avoids the shortcomings of traditional sentiment dictionary methods, which are affected by word order and differing contexts, and avoids the gradient vanishing or dispersion caused by convolution. It also overcomes the tendency of PSO itself to become trapped in local optima. Compared with other methods, the proposed classification model runs faster and effectively improves classification accuracy.
Computer Network
Survey of SDN Applications in Vehicular Networks
GU Xiao-hui,ZHANG Guo-an
Computer Science. 2020, 47 (1): 237-244.  doi:10.11896/jsjkx.190100178
Abstract PDF(1660KB) ( 292 )   
References | Related Articles | Metrics
As vehicular applications, mobile devices and the Internet of Things (IoT) develop rapidly, building an efficient architecture to handle big data in vehicular networks has become an important concern for the future smart city. However, the complex and inflexible architecture of vehicular networks faces a set of challenges, such as high mobility, intermittent connectivity and heterogeneity of applications. In this context, software defined networking (SDN), with its programmable and flexible network architecture, has recently been gaining great attention from research communities, businesses and industries in wired network management and heterogeneous wireless communications. Applying SDN to vehicular networks can significantly improve their flexibility, reliability, programmability and scalability, enhance the capacity of vehicular networks to provide applications and services, and further improve users' quality of experience. Firstly, the SDN framework was described. Secondly, the research progress of the software defined vehicular network (SDVN) was summarized from two perspectives: architectures and data dissemination. Then the current state of research on SDVN combined with mobile edge computing (MEC) was surveyed. After that, existing problems and challenges faced by SDVN were discussed. Finally, several SDVN application prospects were introduced.
Analysis of GNSS Signal Code Tracking Accuracy Under Gauss Interference
YE Lv-yang,FAN Zhan-you,ZHANG Han-qing,LIU Yan,WU Wen-jun,HU Yong-hui
Computer Science. 2020, 47 (1): 245-251.  doi:10.11896/jsjkx.190100193
Abstract PDF(4843KB) ( 123 )   
References | Related Articles | Metrics
Code tracking accuracy is an important parameter for the compatibility and interoperability evaluation of navigation systems. In order to quantitatively analyze the code tracking accuracy of GNSS signals under Gaussian interference, starting from common Gaussian interference signals, the code tracking accuracy of GNSS signals was simulated and analyzed in MATLAB; the CT-SSC expressions of the NELP and DP loops were given, and the CT-SSC and Cramer-Rao lower bounds of the loop models were analyzed. The simulation results show that, under the same conditions, the code tracking error of GNSS signals is more pronounced under Gaussian narrowband interference and wideband interference, while it is more stable under Gaussian matched-spectrum interference and band-limited white interference within a certain signal-to-interference ratio range. Under the DP model, the three-dimensional CT-SSC surface of the GNSS signal is smoother than under the CELP and NELP models, and its tracking performance is the best. The tracking performance analysis of GNSS signals can provide an important reference for GNSS compatibility and interoperability assessment as well as modern GNSS receiver design, and the CT-SSC expressions for the NELP and DP models can be analyzed in parallel with the CELP model.
Channel Allocation and Power Control Algorithm for Cognitive Satellite Networks Based on Cooperative Game Theory
ZHONG Xu-dong,HE Yuan-zhi,REN Bao-quan,DONG Fei-hong
Computer Science. 2020, 47 (1): 252-257.  doi:10.11896/jsjkx.181202352
Abstract PDF(2195KB) ( 107 )   
References | Related Articles | Metrics
With the continuous increase of communication service requirements, satellite networks and terrestrial networks both face a serious spectrum crisis because of limited spectrum resources. Cognitive radio technology makes it possible to realize resource sharing between satellite networks and terrestrial networks to improve network utility. This paper investigated the power control and channel allocation problem for cognitive satellite networks, where satellite users cognitively access the spectrum resources allocated to terrestrial networks as primary users. A reasonable system model is constructed based on the characteristics of satellite and terrestrial networks, and an outage probability threshold is used to represent the effect of channel estimation error on system capacity. To protect the communication performance of the primary base stations, an optimization function is designed based on bargaining game theory, taking into account channel estimation errors, channel resource constraints, the maximum transmit power of cognitive satellite users, and the interference constraints of primary base stations. Closed-form solutions for the optimal transmit power and channel allocation are derived based on convex optimization theory, and a dual iteration algorithm is designed to find them. Finally, the system parameters are set according to the characteristics of satellite networks, and several simulations of the proposed algorithm are carried out on the MATLAB platform. The simulation results show that the proposed algorithm converges properly under different arrival rates, and that channel estimation error decreases the capacity performance of the network. Compared with existing methods, the proposed algorithm improves capacity by more than 50 bps/Hz over the proportional fairness method when the number of beams exceeds 15, and its fairness performance is more than double that of the capacity-maximizing method under the same conditions. Therefore, the proposed algorithm finds a reasonable trade-off between system capacity and fairness among users.
Low-delay and Low-power WSN Clustering Algorithm Based on LEACH
XIONG Cheng-biao,DING Hong-wei,DONG Fa-zhi,YANG Zhi-jun,BAO Li-yong
Computer Science. 2020, 47 (1): 258-264.  doi:10.11896/jsjkx.190100060
Abstract PDF(2420KB) ( 227 )   
References | Related Articles | Metrics
Aiming at the shortcomings of the classic clustering protocol LEACH, an improved algorithm with low latency, low power consumption and balanced network energy was proposed. The algorithm improves LEACH in two respects. Firstly, a CSMA mechanism is adopted in the stable data transmission phase to reduce the data transmission delay. Secondly, in terms of energy balance and energy consumption, a small number of advanced nodes with high initial energy are mixed in, and in cluster head election, the residual energy and average energy of the nodes are considered comprehensively, which prolongs the network lifetime. This paper briefly introduces the LEACH protocol, uses the average cycle method to analyze the CSMA mechanism in LEACH, and obtains the delay calculation method of the improved algorithm. Then the energy consumption of the data transmission stage and the algorithmic complexity of the improved algorithm are analyzed, and the calculation of the cluster head election threshold of the improved algorithm is discussed. Finally, the delay and power consumption of the data transmission stage of the improved algorithm are analyzed, and MATLAB is used for simulation and comparison. The simulation results show that the improved algorithm prolongs the time to the first node death by 31% and the time until all nodes die by 24.7%, and makes network energy consumption more uniform, which effectively mitigates the hot-zone problem in LEACH and the problem of missing regional information caused by node death in actual WSN applications. Compared with LEACH, the data transmission delay of the improved algorithm is reduced by 78.6% on average, ensuring the real-time performance of data in WSN applications. This demonstrates that the improved algorithm performs better in terms of delay, lifetime, uniformity of energy consumption and throughput.
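The cluster-head election threshold being modified here can be sketched from the classic LEACH formula. The residual-energy scaling factor below is an illustrative form of the energy-aware improvement described above, not the paper's exact expression.

```python
def leach_threshold(p, r, eligible, e_residual=None, e_avg=None):
    """Classic LEACH cluster-head election threshold T(n).  A node
    becomes cluster head in round r when a uniform random draw falls
    below T(n).  Nodes that already served as head in the current epoch
    are ineligible (threshold 0).  The optional residual-energy factor
    (illustrative) raises the threshold for nodes with above-average
    remaining energy, balancing consumption across the network."""
    if not eligible:
        return 0.0
    period = round(1 / p)                 # epoch length in rounds
    t = p / (1 - p * (r % period))
    if e_residual is not None and e_avg:
        t *= e_residual / e_avg           # favour energy-rich nodes
    return min(t, 1.0)
```

For p = 0.1 the threshold grows from 0.1 in round 0 to 1.0 in the last round of each 10-round epoch, guaranteeing every eligible node eventually serves as head.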
Study of TASEP Model Based on Road Networks
RUAN Zi-rui,RUAN Zhong-yuan,SHEN Guo-jiang
Computer Science. 2020, 47 (1): 265-269.  doi:10.11896/jsjkx.181202418
Abstract PDF(2160KB) ( 83 )   
References | Related Articles | Metrics
TASEP is a classic model for describing particle transport on one-dimensional lattices that takes into account the volume exclusion effect of real matter. It has been widely applied in biology and public transportation. In this paper, based on the real properties of traffic networks, a modified TASEP model was proposed. The TASEP model is improved as follows: the heterogeneity of the particle hopping rate on each edge is considered, i.e., different hopping rates following a Poisson distribution are set on the edges, and the choice of the next road section by particles at intersections is made non-random. Specifically, a real-time path strategy was proposed: combining the traffic flow and the number of particles on a road, an average moving velocity is obtained for each link, and a parameter α is introduced to make the movements of particles at intersections more rational. The larger the value of α, the more likely particles are to move to the edge with the larger average velocity. Experimental results show that as α increases, the flow of the system is greatly improved, which alleviates congestion to a certain extent. By extending the traditional TASEP model, this paper provides a new insight and direction for the study of urban traffic systems.
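The α-controlled routing choice at intersections can be made concrete with a small sampler. This is an illustrative sketch, not the paper's implementation: weighting each outgoing edge by (average speed)^α is one assumed way to realise "larger α, more rational choice"; α = 0 recovers the uniform random choice of the classic TASEP.

```python
import random

def choose_next_edge(avg_speeds, alpha, rng=random):
    """At an intersection, pick the index of the next edge with
    probability proportional to (average speed)**alpha.  alpha = 0
    gives a uniform random choice; large alpha sends almost all
    particles to the fastest edge."""
    weights = [v ** alpha for v in avg_speeds]
    x = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        x -= w
        if x <= 0:
            return i
    return len(weights) - 1
```

In the full simulation, each link's average speed would itself be updated in real time from its flow and particle count before every routing decision.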
Web Service Composition by Combining FAHP and Graphplan
FAN Guo-dong,ZHU Ming,LI Jing,CUI Xiao-liu
Computer Science. 2020, 47 (1): 270-275.  doi:10.11896/jsjkx.181102228
Abstract PDF(1656KB) ( 125 )   
References | Related Articles | Metrics
In recent years, with the advance of cloud computing, more and more services have been published online. How to find an optimal composition satisfying both functional and non-functional requirements has become a challenging problem: QoS-aware web service composition is NP-hard. To solve this problem, a system combining FAHP with an improved Graphplan algorithm was proposed. Firstly, the overall QoS of each service is computed by FAHP according to user preferences. Secondly, in the forward expansion stage of Graphplan, a dynamic threshold is used to prune less competitive services, which reduces time complexity while ensuring that critical services are retained. Finally, in the backward search stage of Graphplan, the service with the best overall QoS is selected into the composition under the premise of meeting the functional requirements. Experimental results show that the proposed algorithm not only improves the quality of service composition, but also reduces the running time significantly compared with ordinary Graphplan, Skyline and other methods.
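The forward expansion with dynamic-threshold pruning can be sketched as below. This is an illustrative reading, not the paper's code: services are modelled as input/output concept sets with an FAHP-derived scalar QoS, and the "dynamic threshold" is stood in for by the mean QoS of each layer's candidates, since the paper's exact rule is not stated in the abstract.

```python
def expand_layers(services, initial_concepts, goal, max_layers=10):
    """Graphplan-style forward expansion over web services: at each
    layer, keep only invocable services (inputs satisfied by known
    concepts) whose QoS clears a per-layer dynamic threshold."""
    known = set(initial_concepts)
    layers = []
    for _ in range(max_layers):
        # candidates: invocable services that still add new concepts
        candidates = [s for s in services
                      if s["in"] <= known and not s["out"] <= known]
        if not candidates:
            break
        # dynamic threshold: mean QoS of this layer's candidates (assumed rule)
        threshold = sum(s["qos"] for s in candidates) / len(candidates)
        kept = [s for s in candidates if s["qos"] >= threshold]
        layers.append([s["name"] for s in kept])
        for s in kept:
            known |= s["out"]
        if goal <= known:
            break
    return layers, goal <= known
```

The backward search stage would then walk these layers from the goal back to the initial concepts, preferring the highest-QoS provider of each required concept.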
Data Abnormality Processing in Wireless Sensor Networks Based on Distributed Compressed Sensing
HOU Ming-xing,QI Hui,HUANG Bin-ke
Computer Science. 2020, 47 (1): 276-280.  doi:10.11896/jsjkx.180901667
Abstract PDF(1537KB) ( 110 )   
References | Related Articles | Metrics
In wireless sensor networks, massive data acquisition, transmission and processing not only pose severe challenges to the processing capability and power consumption of sensors, but also frequently suffer data anomalies due to sensor failures or sudden changes of environmental factors, which cannot be handled effectively by traditional data processing methods. Regarding these problems, this paper proposed two data abnormality models and a corresponding processing method based on distributed compressed sensing (DCS) for wireless sensor networks. When the data collected by multiple sensors contains abnormal components, DCS reconstructs the common normal component of the data only once and the different abnormal components individually, which avoids repeated processing of the same normal component and improves the processing efficiency of data containing abnormal components. In addition, DCS makes full use of the correlation of the data, which can effectively reduce the amount of data acquisition and enhance robustness against data anomalies. Numerical simulation results on the two data abnormality models show that, compared with traditional compressed sensing based on a single set of measurements, the data processing method based on DCS improves the accuracy of abnormal data reconstruction and reduces the amount of data by about 33%, which proves the effectiveness of the proposed method.
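The common/individual split at the heart of the DCS approach can be illustrated without any reconstruction machinery. In the joint sparsity model, each sensor's reading is a shared common component plus a per-sensor innovation; the sketch below recovers the common part once for all sensors (here via a per-sample median, an assumed robust estimator, not the paper's method) and leaves only the small innovations to be handled individually.

```python
from statistics import median

def separate_common(signals):
    """Split sensor readings x_j = z_c + z_j into one shared common
    component z_c (recovered once) and per-sensor innovations z_j
    (processed individually).  The per-sample median is robust to a
    minority of abnormal sensors."""
    common = [median(col) for col in zip(*signals)]
    innovations = [[x - c for x, c in zip(sig, common)]
                   for sig in signals]
    return common, innovations
```

Because the common component dominates each reading, encoding it once instead of once per sensor is what yields the reduction in transmitted data that the abstract reports.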
Information Security
Advanced Persistent Threat Detection Based on Generative Adversarial Networks and Long Short-term Memory
LIU Hai-bo,WU Tian-bo,SHEN Jing,SHI Chang-ting
Computer Science. 2020, 47 (1): 281-286.  doi:10.11896/jsjkx.181102103
Abstract PDF(1508KB) ( 267 )   
References | Related Articles | Metrics
Advanced persistent threats (APT) bring more and more serious harm. Traditional APT detection methods have lower accuracy when attack data samples are few and the attack duration is long. To solve this problem, an APT attack detection method based on generative adversarial networks (GAN) and long short-term memory (LSTM) was proposed. On the one hand, this method uses a GAN to simulate and generate attack data, producing a large number of attack samples for the discriminant model and improving its accuracy. On the other hand, the memory units and gate structure of the LSTM model preserve feature memory across sequence fragments of an APT attack sequence that are correlated but separated by large time intervals. The Keras open-source framework was used to construct and train the model, and Accuracy, FPR and the ROC curve were used as metrics to compare, test and analyze the attack data generation and APT attack sequence detection methods. By generating simulated attack data and optimizing the discriminant model, the accuracy of the original discriminant model is improved by 2.84%, and the accuracy of APT attack sequence detection is improved by 0.99% compared with a recurrent neural network (RNN) model. The experimental results fully show that the GAN-LSTM based APT attack detection algorithm can improve the accuracy of the discriminant model and reduce the false alarm rate by introducing a generative model to increase the sample size, and that detecting APT attack sequences with the LSTM model achieves better accuracy and a lower false alarm rate than other temporal structures, which shows the feasibility and validity of the proposed method.
Classification and Evaluation of Mobile Application Network Behavior Based on Deep Forest and CWGAN-GP
JIANG Peng-fei, WEI Song-jie
Computer Science. 2020, 47 (1): 287-292.  doi:10.11896/jsjkx.181102118
Abstract PDF(1774KB) ( 147 )   
References | Related Articles | Metrics
In view of the large number and complex functions of mobile applications, among which a variety of malicious applications are mixed, this paper analyzed the network behavior of applications on the Android platform, and designed reasonable network behavior trigger events for different types of applications to simulate network interaction behavior. Based on the network event behavior sequences, an improved deep forest model is used to classify and identify applications. The optimal classification accuracy reaches 99.03%, with high precision, high recall, high F1-score and low training time. In addition, to address the limited number of application samples and the high time cost of data acquisition, a data enhancement method using CWGAN-GP was proposed. Compared with the original generative adversarial network, the training of the model is more stable, and data of specified categories can be generated with only one training. The experimental results show that the classification accuracy is improved by about 9% after the generated data are added to train the deep forest model.
Mobile Secure Payment Scheme Using Identity-based Cryptographic Algorithm+SMS Verification Code
LIU Ya-qiang,LI Xiao-yu
Computer Science. 2020, 47 (1): 293-301.  doi:10.11896/jsjkx.181202414
Abstract PDF(1883KB) ( 190 )   
References | Related Articles | Metrics
Aiming at the problem of stolen funds caused by stolen SMS verification codes in the mobile payment process, and at the heavy burden placed on mobile devices and mobile networks when a mobile payment system is built under a certificate-based cryptosystem, a mobile secure payment scheme based on an identity-based cryptographic algorithm plus SMS verification code was proposed. In this scheme, users and bank servers join an identity-based cryptosystem, so they no longer need digital-certificate-based identity authentication, which greatly reduces the storage and computational overhead of mobile devices and mobile networks. A user registers for mobile banking at the bank counter, sets a user name, password and reserved security questions, and completes the first installation and initialization of the mobile banking APP with the help of bank staff. When logging in, the bank server authenticates the user's identity to ensure that the user is legitimate. When paying, the user's private key is used to generate a digital signature over the SMS verification code; the combination of digital signature and SMS verification code is encrypted with the bank server's public key and sent to the bank server for verification, and the bank server allows the user to pay only after the verification passes. In this scheme, the SMS verification code and the digital signature jointly provide security for the user: even if the verification code is leaked, the attacker cannot generate a digital signature over it, thus ensuring the security of the mobile payment. Theoretical analysis and experimental results show that this scheme not only greatly improves the security of mobile payment, but also keeps the average response time of the system from increasing sharply with the number of mobile terminals, so it has good robustness and feasibility.
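The two-factor check at the heart of the scheme can be sketched as follows. This is a minimal stand-in, not the paper's protocol: HMAC-SHA256 plays the role of the identity-based signature (an SM9-style IBC primitive is not available in the standard library), and the public-key encryption of the message to the bank is omitted. What it shows is the stated security property: a leaked SMS code alone cannot authorize a payment.

```python
import hashlib
import hmac
import os

def sign_code(user_key: bytes, sms_code: str) -> bytes:
    """User side: sign the received SMS verification code with the
    user's private key (HMAC here as an illustrative stand-in)."""
    return hmac.new(user_key, sms_code.encode(), hashlib.sha256).digest()

def bank_verify(user_key: bytes, sms_code: str, signature: bytes,
                expected_code: str) -> bool:
    """Bank side: the payment passes only if BOTH the code matches
    and the signature over it verifies -- so an attacker holding just
    the code cannot produce a valid request."""
    if sms_code != expected_code:
        return False
    return hmac.compare_digest(signature, sign_code(user_key, sms_code))
```

`hmac.compare_digest` is used for the comparison to avoid timing side channels, matching common practice for verifying authentication tags.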
Automatic Error Correction CRO PUF Key Generation Scheme
ZHANG Xiang-yang,SUN Zi-wen
Computer Science. 2020, 47 (1): 302-308.  doi:10.11896/jsjkx.181202390
Abstract PDF(1815KB) ( 96 )   
References | Related Articles | Metrics
For the encryption technology in radio frequency identification security, this paper designed an automatic error correction CRO PUF key generation scheme. The error correction idea of repetition codes in digital communication systems is applied to the Configurable Ring-Oscillator Physical Unclonable Function (CRO PUF) structure: a 3-bit output value is obtained by performing a differential operation on the final oscillation frequencies of adjacent CROs, and this output response is subjected to error correction to obtain the automatic error correction CRO PUF output information, thereby realizing automatic error correction of the CRO PUF circuit. The error correction encoding and decoding in the registration and reconstruction phases of the fuzzy extractor corrects bit-flip errors in the reproduced output information vector, and the corrected reproduced output vector is fed to a Hash module to generate the secret key. On a Linux system, Monte Carlo simulation of the automatic error correction CRO PUF circuit was carried out with the TSMC 0.18μm, 1.8V CMOS process library in the Spectre environment of Cadence Virtuoso, and the reproduced output information vectors of the PUF circuit were processed by the fuzzy extractor in MATLAB. The simulation results show that under the influence of supply voltage the reliability of the automatic error correction CRO PUF circuit is at most 98.96% and at least 92.71%, and under the influence of temperature it is at most 93.75% and at least 84.90%. The uniformity of the automatic error correction CRO PUF is significantly improved compared with the CRO PUF. Overall, neither the automatic error correction CRO PUF nor the plain CRO PUF has a distinct advantage in uniqueness, but after computing the variance of the two groups of data, the automatic error correction CRO PUF approximates the standard value of 50% better. The reliability of the PUF reproduced output response vector processed by the fuzzy extractor is further improved, up to 99.8%; it is almost immune to environmental factors and can be used directly as a key.
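The repetition-code correction the scheme borrows from digital communications is simple to state in code. The sketch below is the textbook form of the idea, not the paper's circuit: each key bit is carried by n identical copies (matching the 3-bit differential output), and a majority vote corrects up to (n-1)//2 flipped bits per group.

```python
def encode_repetition(bits, n=3):
    """Encode each key bit as n identical copies (repetition code)."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(noisy, n=3):
    """Majority-vote decode: corrects up to (n-1)//2 flipped bits in
    each group of n -- the automatic error correction applied to the
    noisy CRO PUF response before the fuzzy extractor and hash."""
    return [1 if sum(noisy[i:i + n]) > n // 2 else 0
            for i in range(0, len(noisy), n)]
```

A single-bit flip inside any 3-bit group is thus silently repaired, which is why the corrected response can approach key-grade reliability before the fuzzy extractor does the rest.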
Special Digital Signature Scheme Based on Identity Identification and Its Application
ZUO Li-ming,CHEN Lan-lan
Computer Science. 2020, 47 (1): 309-314.  doi:10.11896/jsjkx.181202416
Abstract PDF(2038KB) ( 140 )   
References | Related Articles | Metrics
Traditional identity-based cryptography has a key escrow problem: when the private key generator suffers a security incident, the whole cryptosystem is easily paralyzed, so solving the key escrow problem has always been a hot topic in cryptography research. In view of this problem, a special identity-based digital signature scheme that does not require the intervention of a trusted third party was proposed in this paper. Firstly, the security of the scheme is proved under the random oracle model and the computational Diffie-Hellman (CDH) assumption. Then, the theoretical performance of several identity-based digital signature schemes is compared and analyzed. Finally, the signature scheme is implemented in C based on the PBC library, and the actual running efficiency of several signature schemes is analyzed. Experimental results show that the average total time of the proposed scheme is about 0.148s, approximately 11.9% and 13.5% lower than the Subhas and Neetu schemes respectively, and close to the times of the Shamir and Boneh schemes. Therefore, the proposed scheme has lower computational complexity and higher efficiency, and is suitable for application scenarios with high data protection requirements, such as transport monitoring of dangerous goods.
Secure Transmission Scheme for Environmental Monitoring Data Based on Blockchain
ZHOU Wan-kai, LONG Min
Computer Science. 2020, 47 (1): 315-320.  doi:10.11896/jsjkx.190100195
Abstract PDF(1292KB) ( 281 )   
References | Related Articles | Metrics
With the rapid development of the Internet of Things, environmental monitoring systems have greatly improved the efficiency and transparency of government daily operations. However, most existing environmental monitoring systems provide services in a centralized manner and rely heavily on human control. Highly centralized system architectures are vulnerable to external attacks; in addition, it is relatively easy for criminals to destroy the authenticity of the data, so public trust in environmental monitoring data is not high. To resolve these problems, this paper proposed an environmental monitoring data transmission scheme based on blockchain. The data acquired by a monitoring device is signed and delivered to the data collection terminal, which verifies the data and writes it to the blockchain; smart contracts analyze the data in real time and publish the results. Then, a PBFT consensus algorithm based on grouping was proposed to improve the scalability of the system. Analysis shows that the environmental monitoring blockchain can ensure the security, authenticity and integrity of environmental monitoring data, and the feasibility of the scheme is verified with specific cases.
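The scalability benefit of grouping in PBFT can be illustrated with a toy two-level sketch. The paper's exact message flow is not given in the abstract, so this is an assumed simplification: replicas first agree locally by majority within their group, and the group decisions are then combined under the usual BFT rule of strictly more than 2/3 agreement. Grouping cuts the all-to-all message cost from O(n²) over all replicas to O(g²) within each group plus a small inter-group round.

```python
def group_consensus(votes, group_size):
    """Two-level grouping sketch of PBFT: per-group majority first,
    then commit only if strictly more than 2/3 of the group
    decisions agree (the classic BFT quorum condition)."""
    groups = [votes[i:i + group_size]
              for i in range(0, len(votes), group_size)]
    # each group settles on its local majority value
    group_decisions = [max(set(g), key=g.count) for g in groups]
    winner = max(set(group_decisions), key=group_decisions.count)
    agree = group_decisions.count(winner)
    committed = agree * 3 > 2 * len(group_decisions)  # > 2/3 of groups
    return winner, committed
```

With a handful of faulty replicas confined to a minority of groups, the value proposed by honest nodes still commits; when faults are spread widely enough to capture a third of the groups, the commit correctly fails.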
Energy-efficient Password Recovery Method for 7-Zip Document Based on FPGA
CHEN Xiao-jie,ZHOU Qing-lei,LI Bin
Computer Science. 2020, 47 (1): 321-328.  doi:10.11896/jsjkx.190100027
Abstract PDF(2346KB) ( 140 )   
References | Related Articles | Metrics
With the wide use of 7-Zip compression software, cracking 7-Zip passwords is important for information security. Currently, cracking 7-Zip encrypted documents mainly relies on CPU and GPU platforms, but the large password space and high computational complexity call for a higher-performance computing platform to find the correct password within a limited time. Therefore, after analyzing the PMC characteristics of the decryption algorithm, this paper adopted a reconfigurable FPGA hardware computing platform, used pipeline technology to implement data splicing and the SHA-256 algorithm, used precomputation and the CSA method to optimize the critical path of SHA-256, and used dual-port RAM to store verification data, thus satisfying the computational and storage requirements of the algorithm and realizing a high-performance 7-Zip decryption implementation. The experimental data show that the optimization methods in this paper greatly improve the performance of the SHA-256 algorithm, raising its throughput to 110.080Gbps. After optimizing the decryption algorithm with these methods, 10-bit passwords are tested at a rate of 10608 per second, 226 times that of the CPU and 1.4 times that of the GPU, with about 8 times the energy efficiency of the GPU, which greatly improves performance while reducing the demand for high power consumption.
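The reason SHA-256 throughput dominates 7-Zip cracking is visible in the key derivation itself. The sketch below follows the 7z AES-256 KDF as commonly documented (a single running SHA-256 absorbing salt, UTF-16-LE password and a 64-bit counter for 2^NumCyclesPower rounds, 2^19 by default); treat the exact layout as an assumption from that public description rather than from the paper. Each password candidate costs half a million hash updates, which is the workload the paper pipelines on FPGA.

```python
import hashlib
import struct

def sevenzip_kdf(password: str, salt: bytes = b"",
                 cycles_power: int = 19) -> bytes:
    """7z-style key derivation: iterate SHA-256 over
    salt || password(UTF-16-LE) || counter for 2**cycles_power rounds."""
    data = salt + password.encode("utf-16-le")
    h = hashlib.sha256()
    for i in range(1 << cycles_power):
        h.update(data)
        h.update(struct.pack("<Q", i))  # 64-bit little-endian round counter
    return h.digest()

def try_passwords(candidates, target_key, salt=b"", cycles_power=19):
    """Brute-force loop: derive one key per candidate and compare with
    the verification value -- the part parallelised across pipelines."""
    for pw in candidates:
        if sevenzip_kdf(pw, salt, cycles_power) == target_key:
            return pw
    return None
```

Because the per-candidate cost is fixed and the candidates are independent, speed scales linearly with the number of hardware pipelines, which is exactly where the FPGA's advantage over CPU and GPU comes from.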