Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 48 Issue 8, 15 August 2021
Database & Big Data & Data Science
Data Science Platform: Features, Technologies and Trends
CHAO Le-men, WANG Rui
Computer Science. 2021, 48 (8): 1-12.  doi:10.11896/jsjkx.210600033
The concept and types of data science platforms are proposed based upon in-depth studies of more than 35 data science platforms covered by the annual Magic Quadrant for Data Science Platforms reports since 2015. The main scientific issues in academic research on data science platforms involve the design of the platform, its scalability, research and development based on data lakes, support for team cooperation, open strategy, and engineering methodology. The main features of a data science platform include modular development and integration capability, DevOps, and an emphasis on scalability, user experience, citizen data scientists, and human-machine collaboration scenarios. The key technologies for realizing a data science platform are machine learning, stream processing, tidy data, containerization and data visualization. The future development of data science platforms is mainly reflected in integration with artificial intelligence, support for open source technology, emphasis on citizen data scientists, integration of data governance, introduction of data lakes, exploration of advanced analysis and applications, transformation toward the whole data science pipeline, and diversification of application fields. Research and development of data science platforms should follow the design principles of centering on activating data value, human-in-the-loop, DevOps, balancing usability and explainability, cultivating a data science product ecosystem, emphasizing user experience and ease of use, and integrating with other business systems. At present, the research and development of data science platforms needs theoretical breakthroughs in data bias and fairness, robustness and stability, privacy protection, causal analysis, and trusted/responsible data science platforms.
Survey of Research Progress on Cross-modal Retrieval
FENG Xia, HU Zhi-yi, LIU Cai-hua
Computer Science. 2021, 48 (8): 13-23.  doi:10.11896/jsjkx.200800165
With the explosive growth of multimedia data on the Internet, single-modal retrieval can no longer meet users' needs, and cross-modal retrieval has emerged. Cross-modal retrieval aims to retrieve data of one modality using data of another modality. Its core tasks are to extract data features and to measure the correlation between data of different modalities. This paper summarizes recent research progress in the field of cross-modal retrieval from the perspectives of traditional methods, deep learning methods, handcrafted-feature hashing methods and deep learning hashing methods. On this basis, the performance of various algorithms on commonly used standard datasets for cross-modal retrieval is compared and analyzed. Finally, the open problems of cross-modal retrieval research are analyzed and future development trends of the field are discussed.
Seismic Data Super-resolution Method Based on Residual Attention Network
ZHOU Wen-hui, SHI Min, ZHU Deng-ming, ZHOU Jun
Computer Science. 2021, 48 (8): 24-31.  doi:10.11896/jsjkx.200900034
Seismic data plays a vital role in oil and gas exploration and geological surveying. Accurate and detailed seismic data can provide precise guidance for oil and gas exploration, reduce exploration risk, and generate huge social and economic benefits. Existing methods for improving the resolution of seismic data have difficulty recovering detailed geological information when facing large amounts of data, perform poorly in high-resolution recovery, denoising and efficiency, and can hardly meet practical needs. Seismic data reflects the composition of geological structures and strata, and is characterized by high local correlation and low global correlation. At the same time, the high-frequency part of seismic data usually contains important geological information such as stratification and faults. In view of these characteristics, this paper innovatively transforms the problem of seismic data reconstruction into an image super-resolution problem, and proposes a seismic data super-resolution method based on generative adversarial networks. Considering the high local correlation and low global correlation of the seismic data distribution, a residual attention module is designed to mine the inherent correlation of seismic data, so as to recover more refined seismic data. A generative adversarial network model is trained with a relativistic adversarial loss function, and the generator is used to perform super-resolution recovery of the seismic data. Experiments on real seismic data show that the proposed method achieves good super-resolution results and has strong practicability.
Improved Federated Averaging Algorithm Based on Analytic Hierarchy Process
LUO Chang-yin, CHEN Xue-bin, MA Chun-di, ZHANG Shu-fen
Computer Science. 2021, 48 (8): 32-40.  doi:10.11896/jsjkx.201000093
In the federated averaging algorithm, weight updates are used to update the global model. The algorithm only considers the size of each client's dataset when updating weights, and does not consider the impact of data quality on the model. An improved federated averaging algorithm based on the analytic hierarchy process is proposed, which is the first to process multi-source data from the perspective of data quality. First, the entropy method is used to calculate the importance of each attribute in the data, which serves as the criterion-layer value in the hierarchical analysis to compute the data quality of each client. Then, combined with the amount of data on each client, the weight-update method of the global model is recalculated. Simulation results show that for small and medium datasets, the model trained with support vector machines has the highest accuracy, reaching 85.7152%. For large datasets, the model trained with random forests has the highest accuracy, reaching 91.9321%. Compared with the traditional federated averaging method, the accuracy increases by 3.5% on small and medium datasets and 1.3% on large datasets; the proposed method improves model accuracy while improving the security of data and models.
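The aggregation step described in this abstract can be sketched as follows: each client's contribution to the global model is weighted by both its sample count and a data-quality score (in the paper the quality score is derived via the entropy method and hierarchical analysis; here the scores are simply given as inputs). Function and variable names are illustrative, not from the paper.

```python
def aggregate(client_models, sample_counts, quality_scores):
    """Weighted average of client parameter vectors.

    client_models  : list of parameter lists (one per client)
    sample_counts  : number of training samples held by each client
    quality_scores : data-quality score of each client (higher = better)
    """
    # Combine data volume and data quality into one weight per client.
    raw = [n * q for n, q in zip(sample_counts, quality_scores)]
    total = sum(raw)
    weights = [r / total for r in raw]

    # Parameter-wise weighted average (the FedAvg aggregation rule).
    dim = len(client_models[0])
    return [sum(w * m[i] for w, m in zip(weights, client_models))
            for i in range(dim)]

global_model = aggregate(
    client_models=[[1.0, 2.0], [3.0, 4.0]],
    sample_counts=[100, 300],
    quality_scores=[1.0, 1.0],  # equal quality reduces to plain FedAvg
)
```

With equal quality scores the weights reduce to the data-volume proportions of standard federated averaging; unequal scores shift the average toward higher-quality clients.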
Redundant RFID Data Removal Algorithm Based on Dynamic-additional Bloom Filter
DUAN Wen, ZHOU Liang
Computer Science. 2021, 48 (8): 41-46.  doi:10.11896/jsjkx.200700093
The high redundancy generated when RFID devices read tag information results in pressure on real-time transmission, waste of storage space and unreliable analysis results for upper-layer applications. To solve these problems, a dynamic-additional Bloom filter algorithm (DATRBF) is proposed to remove redundant RFID data. Firstly, combining the characteristics of RFID data and considering the influence of time and reader, a basic Bloom filter (TRBF) is designed. Then, the algorithm decides whether to adjust the TRBF or to dynamically add an additional TRBF according to the change of data volume within a fixed time interval, and the false-positive rate is kept within a threshold by expanding the bit array with the additional TRBF. Finally, the two filters are combined to judge whether the data is redundant. Experiments show that the DATRBF algorithm has obvious advantages over the traditional Bloom filter (BF) and the temporal-spatial Bloom filter (TSBF) when filtering redundant RFID data streams. When the data volume fluctuates randomly, the false-positive rate of DATRBF is on average about 49% of that of TSBF, and DATRBF maintains a stable and low false-positive rate as the data volume continues to rise.
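A minimal Bloom-filter sketch of the redundancy check described above: following the abstract, each element combines the tag ID, the reader ID and a coarse time window, so the same tag read by the same reader within one window is treated as redundant. The TRBF/DATRBF specifics (dynamic resizing, the additional filter) are not reproduced; names and parameters are illustrative.

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.num_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

def is_redundant(bf, tag_id, reader_id, timestamp, window=10):
    """Return True if this (tag, reader) pair was already seen in the
    current time window; otherwise record it and return False."""
    key = (tag_id, reader_id, timestamp // window)
    if bf.contains(key):
        return True          # probably a duplicate read
    bf.add(key)
    return False

bf = BloomFilter()
first = is_redundant(bf, "tag42", "readerA", timestamp=103)   # new read
second = is_redundant(bf, "tag42", "readerA", timestamp=105)  # same window
third = is_redundant(bf, "tag42", "readerA", timestamp=117)   # new window
```

The false-positive rate the abstract discusses comes from hash collisions in the bit array; the paper's dynamic-additional filter keeps that rate below a threshold by growing the array as the data volume changes.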
Multiple Kernel Clustering via Local Regression Integration
DU Liang, REN Xin, ZHANG Hai-ying, ZHOU Peng
Computer Science. 2021, 48 (8): 47-52.  doi:10.11896/jsjkx.201000106
Multiple kernel methods rarely consider the intrinsic manifold structure of multiple kernel data and estimate the consensus kernel matrix with a quadratic number of variables, which makes them vulnerable to noise and outliers within the candidate kernels. This paper first presents a clustering method via kernelized local regression (CKLR), which captures the local structure of kernel data and employs kernel regression on the local region to predict the clustering results. The paper then extends it to clustering via multiple kernel local regression (CMKLR). A kernel-level local-regression sparse coefficient matrix is constructed for each candidate kernel, which well characterizes the kernel-level manifold structure. All the kernel-level local regression coefficients are then aggregated via linear weights to generate a consensus sparse local-regression coefficient, which largely reduces the number of variables and is more robust against noise and outliers within multiple kernel data. The proposed method CMKLR thus avoids the above two limitations and contains only one additional hyperparameter for tuning. Extensive experimental results show that the clustering performance of the proposed method on benchmark datasets is better than that of 10 state-of-the-art multiple kernel clustering methods.
Structure Preserving Unsupervised Feature Selection Based on Autoencoder and Manifold Regularization
YANG Lei, JIANG Ai-lian, QIANG Yan
Computer Science. 2021, 48 (8): 53-59.  doi:10.11896/jsjkx.200700211
High-dimensional data contains a large number of redundant and irrelevant features, which seriously affect the efficiency and quality of data mining and the generalization performance of machine learning. Feature selection has therefore become an important research direction in computer science. In this paper, an unsupervised feature selection algorithm is proposed that exploits the nonlinear learning ability of the autoencoder. First, based on the reconstruction error of the autoencoder, single features that are important for data reconstruction are selected. Second, feature weights are used to select the subset of features that contributes most to the reconstruction of the other features. Manifold learning is introduced to capture the local and non-local structure of the original data space, and l2,1 sparse regularization is added to the feature weights to improve their sparsity so that more discriminative features can be selected. Finally, a new objective function is constructed and optimized by a gradient descent algorithm. The proposed algorithm is evaluated on six different types of typical datasets and compared with five commonly used unsupervised feature selection algorithms. Experimental results verify that the proposed algorithm can effectively select important features and significantly improve classification accuracy and clustering accuracy.
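The l2,1 norm used above as a sparsity regularizer sums the Euclidean norms of the rows of a weight matrix, so penalizing it drives whole rows (i.e. whole features) to zero. A minimal stand-alone computation, for illustration only:

```python
import math

def l21_norm(W):
    """l2,1 norm: sum of the l2 norms of the rows of W."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

W = [
    [3.0, 4.0],   # row norm 5.0 -> this feature carries weight, is kept
    [0.0, 0.0],   # row norm 0.0 -> this feature is effectively pruned
]
value = l21_norm(W)   # 5.0
```

Unlike an element-wise l1 penalty, this row-grouped penalty zeroes out a feature's entire weight row at once, which is what makes it suitable for feature selection.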
Text Matching Method Based on Fine-grained Difference Features
WANG Sheng, ZHANG Yang-sen, CHEN Ruo-yu, XIANG Ga
Computer Science. 2021, 48 (8): 60-65.  doi:10.11896/jsjkx.200700008
Text matching is one of the key technologies in retrieval systems. Aiming at the problem that existing text matching models cannot accurately capture the semantic differences between texts, this paper proposes a text matching method based on fine-grained difference features. Firstly, a pre-trained model is used as the base model to extract the semantics of the texts to be matched and to make a preliminary match. Then, the idea of adversarial learning is introduced in the embedding layer: virtual adversarial samples are constructed for training, which improves the learning and generalization ability of the model. Finally, fine-grained difference features of the texts are introduced to correct the preliminary prediction, which effectively improves the model's ability to capture fine-grained differences and thus the performance of the text matching model. Experiments are conducted on two datasets; on the LCQMC dataset, the method achieves an accuracy of 88.96%, outperforming the best known model.
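The adversarial-sample construction mentioned above can be sketched as perturbing the embedding in the direction of the loss gradient, scaled to a fixed norm (the FGM-style construction commonly used for text models; the paper's exact construction may differ). `epsilon` and all names are illustrative.

```python
import math

def adversarial_perturbation(grad, epsilon=0.1):
    """Return epsilon * grad / ||grad||_2 (zero vector if grad is zero)."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0.0:
        return [0.0] * len(grad)
    return [epsilon * g / norm for g in grad]

embedding = [1.0, 2.0, 2.0]
grad = [0.0, 3.0, 4.0]                  # gradient of the loss w.r.t. embedding
delta = adversarial_perturbation(grad)  # ≈ [0.0, 0.06, 0.08]
adv_embedding = [e + d for e, d in zip(embedding, delta)]
```

Training on `adv_embedding` alongside the clean embedding is what gives the model the robustness and generalization gains the abstract describes.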
Tensor Completion Method Based on Coupled Random Projection
YANG Hong-xin, SONG Bao-yan, LIU Ting-ting, DU Yue-feng, LI Xiao-guang
Computer Science. 2021, 48 (8): 66-71.  doi:10.11896/jsjkx.200900055
In modern signal processing, more and more fields need to store and analyze data with large scale, high dimensionality and complex structure. Tensors, as a high-order extension of vectors and matrices, can represent the structure of high-dimensional data more intuitively while maintaining the inherent relationships of the original data. Tensor completion, which recovers the original tensor from a noisy or incomplete tensor, is an important branch of tensor research and has been widely used in collaborative filtering, image restoration, data mining and other fields. Focusing on the high time complexity of current tensor completion techniques, this paper proposes a new method based on coupled random projection. The proposed method consists of two parts: coupled tensor decomposition (CPD) and a random projection matrix (RPM). Through the RPM, the original high-dimensional tensor is projected into a low-dimensional space to generate a surrogate tensor, and completion is performed in the low-dimensional space, which improves efficiency. The CPD is then used to reconstruct the original tensor by mapping the completed low-dimensional tensor back into the high-dimensional space. Finally, experiments are used to analyze the effectiveness and efficiency of the proposed method.
Recommendation Algorithm Based on Heterogeneous Information Network Embedding and Attention Neural Network
ZHAO Jin-long, ZHAO Zhong-ying
Computer Science. 2021, 48 (8): 72-79.  doi:10.11896/jsjkx.200800226
Recommender systems, as a very effective technique for alleviating information overload, have received a great deal of attention from researchers. Real recommendation applications can be modeled as heterogeneous networks with multiple types of nodes and relations, so heterogeneous network embedding based recommendation has become a hot research topic in recent years. However, most existing studies do not fully explore the auxiliary information and complex relations that are valuable for enhancing recommendation performance. To address these problems, a recommendation algorithm based on heterogeneous information network embedding and an attention neural network is proposed. First, this paper proposes a heterogeneous information network embedding method that maintains semantic relationships and topological structure simultaneously; it designs a meta-path based random walk strategy to extract node sequences from heterogeneous information networks, and the sequences are filtered and then used to learn embeddings for each user and item under different meta-paths. Finally, the paper presents a recommendation algorithm based on an attention neural network that takes the above embeddings as input. The attention network, composed of attention layers and hidden layers, is able to explore the complex relationships and hence enhance recommendation performance. To verify the effectiveness of the proposed method, experiments are conducted on two kinds of real-world datasets against three competitive algorithms. The results show that the proposed algorithm improves recommendation performance in terms of MAE and RMSE, with a maximum improvement of 8.9%.
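The meta-path based random walk used in the embedding step can be sketched on a toy heterogeneous graph: the walker only follows edges whose target node has the type required by the next position of the meta-path (here a "user-item" alternation). The graph, the meta-path and all names are toy examples, not from the paper.

```python
import random

node_type = {"u1": "user", "u2": "user", "i1": "item", "i2": "item"}
neighbors = {
    "u1": ["i1", "i2"],
    "u2": ["i1"],
    "i1": ["u1", "u2"],
    "i2": ["u1"],
}

def metapath_walk(start, meta_path, length, rng):
    """Walk `length` steps, constraining each step to the node type
    given by cycling through `meta_path`."""
    walk = [start]
    current = start
    for step in range(1, length + 1):
        wanted = meta_path[step % len(meta_path)]
        candidates = [n for n in neighbors[current] if node_type[n] == wanted]
        if not candidates:
            break                     # dead end for this meta-path
        current = rng.choice(candidates)
        walk.append(current)
    return walk

rng = random.Random(0)
walk = metapath_walk("u1", meta_path=["user", "item"], length=4, rng=rng)
# node types alternate user/item along the walk
```

The resulting type-constrained sequences are what a skip-gram style model would consume to learn user and item embeddings per meta-path.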
Study on Judicial Data Classification Method Based on Natural Language Processing Technologies
WANG Li-mei, ZHU Xu-guang, WANG De-jia, ZHANG Yong, XING Chun-xiao
Computer Science. 2021, 48 (8): 80-85.  doi:10.11896/jsjkx.210300130
The rapid increase in the number of judgment documents creates an urgent need for automated classification. However, existing studies lack methods that use judgment results as the classification target in subdivided civil cases, and therefore cannot accurately classify the judgment results of civil cases. In this paper, we apply deep learning to the classification of civil case judgment results, and obtain a model with better performance in this field through a horizontal comparison of multiple deep learning models. The model is further optimized based on the data characteristics of judgment documents. Experiments show that the Transformer model's macro precision, macro recall and macro F1 score on judgment result classification are all higher than those of the other models. By adjusting the data preprocessing process and the position embedding method of the Transformer model, the performance of the model is increased by 1%~2%.
Power Knowledge Text Mining Based on FP-Growth Algorithm and GRNN
BAI Yong, ZHANG Zhan-long, XIONG Jun-di
Computer Science. 2021, 48 (8): 86-90.  doi:10.11896/jsjkx.210600031
To improve the performance of power knowledge text mining, the FP-Growth algorithm is used to mine the strongly correlated factors that affect power demand, and the GRNN algorithm is used to forecast power demand. Firstly, the indices of the power text to be mined are extracted and encoded to generate the initial FP-Tree. Then, FP-Growth traverses all frequent sets generated from the FP-Tree, filters out items below the minimum support, and keeps the frequent items with higher frequency. According to the correlated items counted from the updated FP-Tree, variables strongly correlated with the growth rate of total electricity consumption are selected to generate training samples. Finally, the GRNN is trained on the power demand text; given power demand forecasting samples and a smoothing factor, the forecast is obtained through the output of the pattern layer and a weighted sum. Experimental results show that better power text mining performance can be obtained by setting the minimum support and the smoothing factor of the GRNN appropriately. Compared with common mining algorithms, this algorithm achieves higher accuracy of power demand forecasting.
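The GRNN prediction step described above can be sketched compactly: the output is a Gaussian-kernel weighted average of the training targets, controlled by a single smoothing factor sigma. The data and names here are illustrative, not from the paper.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized regression neural network prediction for scalar inputs."""
    # Pattern layer: Gaussian kernel similarity to every training sample.
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    # Summation/output layers: weighted average of the training targets.
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

train_x = [0.0, 1.0, 2.0]
train_y = [0.0, 1.0, 4.0]
y = grnn_predict(1.0, train_x, train_y)   # near 1.0, pulled up toward y=4
```

A smaller sigma makes the prediction follow the nearest training sample; a larger sigma smooths toward the global mean, which is why the abstract treats the smoothing factor as the key parameter to set.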
Computer Graphics & Multimedia
Remote Sensing Image Pansharpening Feedback Network Based on Perceptual Loss
WANG Le, YANG Xiao-min
Computer Science. 2021, 48 (8): 91-98.  doi:10.11896/jsjkx.200700112
Pansharpening aims to sharpen a low-resolution multi-channel multispectral (MS) image with a high-resolution single-channel panchromatic (PAN) image to obtain a high-resolution multispectral (HRMS) image, and is an important task in remote sensing image processing. A feedback network based on perceptual loss is proposed. First, detail information and spectral information are extracted from the PAN image and the MS image respectively, and then combined through stacked up- and down-sampling layers and dense connections for information fusion. A feedback connection is used to enrich low-level information with high-level information. Finally, the HRMS image is reconstructed. Compared with traditional pansharpening algorithms, the proposed algorithm uses the PAN image and the HRMS image as the supervision of the network, and by computing a perceptual loss between the PAN image and the network-reconstructed HRMS image, the output image contains richer spatial detail. Experimental results show that the proposed algorithm outperforms widely used algorithms in both objective evaluation and visual perception.
Full Reference Color Image Quality Assessment Method Based on Spatial and Frequency Domain Joint Features with Random Forest
YANG Xiao-qin, LIU Guo-jun, GUO Jian-hui, MA Wen-tao
Computer Science. 2021, 48 (8): 99-105.  doi:10.11896/jsjkx.200700106
This paper designs an objective evaluation algorithm that automatically evaluates image quality consistently with the human visual system. Since most traditional full-reference image quality assessment methods only analyze images in the spatial domain and have shortcomings in their pooling strategies, this paper proposes a random-forest-based full-reference color image quality assessment method with joint spatial-frequency domain features. Firstly, the method extracts chroma and gradient features in the spatial domain, which characterize the color information and spatial structure of images. Texture details from the responses of a log-Gabor filter bank and spatial-frequency features are extracted in the frequency domain, and together these form the joint features. Then, a random forest learns the mapping from the feature vector to the subjective opinion score to predict the objective quality score. Experiments conducted on three standard databases, i.e. TID2013, TID2008 and CSIQ, show that the comprehensive evaluation performance of the proposed method is better than state-of-the-art full-reference assessment algorithms; on the TID2013 database in particular, the Pearson linear correlation coefficient reaches 0.9397.
Lightweight Anchor-free Object Detection Algorithm Based on Keypoint Detection
GONG Hao-tian, ZHANG Meng
Computer Science. 2021, 48 (8): 106-110.  doi:10.11896/jsjkx.200700161
To address the large number of parameters of keypoint-based object detection networks and the problem of mismatched bounding boxes, this paper proposes a lightweight keypoint-based anchor-free object detection algorithm. The image is fed into an improved hourglass network to extract features; through a cascade corner pooling module and a center pooling module, the network outputs heatmaps of three kinds of keypoints and their embedding vectors. Finally, keypoints are matched by their embedding vectors and the bounding boxes are drawn. The innovation of this paper is to apply the Fire module of SqueezeNet to the CenterNet object detection network and to replace the conventional convolutions in the backbone with depthwise separable convolutions. At the same time, aiming at the mismatched-bounding-box problem in CenterNet, the algorithm adjusts the network's output and loss function. Experimental results show that the model size is reduced to 1/7 of CenterNet, while the accuracy and inference speed remain higher than object detection algorithms of similar size such as YOLOv3 and CornerNet-Lite.
Improved FCM Brain MRI Image Segmentation Algorithm Based on Tamura Texture Feature
QIAO Ying-jing, GAO Bao-lu, SHI Rui-xue, LIU Xuan, WANG Zhao-hui
Computer Science. 2021, 48 (8): 111-117.  doi:10.11896/jsjkx.200700003
To solve the problems of noise sensitivity and the randomness of initial cluster centers when segmenting brain MRI images with the FCM algorithm, an improved FCM image segmentation algorithm based on Tamura texture features is proposed. Firstly, the Tamura texture feature of the image is extracted and linearly weighted with the gray-level feature to form a fused feature. Then, the density of each pixel is calculated using a fuzzy neighborhood relation, and the initial cluster centers are selected by combining density with distance. Finally, the fused feature is used as a constraint when updating memberships and cluster centers. In the experiments, FCM, D-FCM, WKFCM and the proposed method are used to segment images from the BrainWeb MRI dataset, and their noise robustness, accuracy and efficiency are compared. Experimental results show that the proposed algorithm obtains better initial cluster centers, is more robust to noise and intensity inhomogeneity, and can segment brain MRI images quickly and effectively.
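One iteration of standard fuzzy c-means (FCM), the algorithm this abstract builds on, can be sketched on scalar features to show the membership and cluster-center updates (without the Tamura-feature fusion or the density-based initialization the paper adds). Names and data are illustrative.

```python
def fcm_step(data, centers, m=2.0):
    """One FCM iteration: update memberships, then cluster centers."""
    # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = []
    for x in data:
        dists = [abs(x - c) for c in centers]
        row = []
        for dk in dists:
            if dk == 0.0:                      # point sits on a center
                row = [1.0 if d == 0.0 else 0.0 for d in dists]
                break
            row.append(1.0 / sum((dk / dj) ** (2 / (m - 1)) for dj in dists))
        u.append(row)
    # Center update: c_k = sum_i u_ik^m * x_i / sum_i u_ik^m
    new_centers = []
    for k in range(len(centers)):
        num = sum((u[i][k] ** m) * data[i] for i in range(len(data)))
        den = sum(u[i][k] ** m for i in range(len(data)))
        new_centers.append(num / den)
    return u, new_centers

data = [0.0, 0.1, 0.9, 1.0]
u, centers = fcm_step(data, centers=[0.2, 0.8])
```

Each membership row sums to 1, and the centers move toward the two natural clusters in the data; the paper's contribution replaces the random initial `centers` with density-selected ones and augments the distance with the fused texture feature.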
Choroidal Neovascularization Segmentation Combining Temporal Supervision and Attention Mechanism
YE Zhong-yu, WU Meng-lin
Computer Science. 2021, 48 (8): 118-124.  doi:10.11896/jsjkx.200600150
Choroidal neovascularization (CNV) generally occurs in the late stage of age-related macular degeneration (AMD), and accurate segmentation of CNV in spectral-domain optical coherence tomography (SD-OCT) images is of great significance for the diagnosis and treatment of AMD. This paper proposes a CNV multi-task segmentation network that combines a temporal model and an attention mechanism. Consecutive SD-OCT images are input into the segmentation network, and multi-scale information is extracted in the encoder. To better extract local features, attention gates are added in the skip connections. To address discontinuous segmentation across scans, the pooled features of the segmentation network are passed to a temporal-constraint network that generates continuity constraints between adjacent frames, and gradient constraints are added to the loss function to better preserve lesion boundaries. A spatial pyramid fuses the feature maps of the two parts of the network to produce the segmentation loss, which improves the final segmentation accuracy. With patient independence ensured, cross-validation is performed on 200 eyes of 12 patients. The Dice coefficient reaches 76.3% and the overlap reaches 60.7%, showing that CNV can be reliably segmented in SD-OCT images.
Hyperspectral Image Denoising Based on Nonconvex Low Rank Matrix Approximation and Total Variation Regularization
TAO Xing-peng, XU Hong-hui, ZHENG Jian-wei, CHEN Wan-jun
Computer Science. 2021, 48 (8): 125-133.  doi:10.11896/jsjkx.200400143
Hyperspectral images (HSIs) are often corrupted by hybrid noise during acquisition, which seriously weakens the performance of subsequent HSI applications. In this paper, a nonconvex regularizer is used in place of the traditional nuclear norm to reformulate the approximation problem, which guarantees a tighter approximation to the original sparsity-constrained rank function. A hybrid noise removal model is then proposed that integrates the nonconvex surrogate function, total variation regularization and the l2,1 norm into a unified framework. The proposed algorithm decomposes the degraded HSI, in matrix form, into a low-rank component and a sparse term, and uses total variation regularization to maintain edge information and improve the spatial piecewise smoothness of the HSI. Finally, exploiting the special properties of the nonconvex surrogate function, an iterative algorithm based on the augmented Lagrangian multiplier method is used for optimization. Extensive experiments on several well-known datasets show that the proposed algorithm not only effectively removes hybrid noise but also better preserves the structure and details of the images. Compared with other existing hyperspectral denoising methods, the visual effects and quantitative evaluation results of the proposed algorithm are significantly better.
Study on Fast Registration of UAV Aerial Images
HU Yu-cheng, RUI Ting, YANG Cheng-song, WANG Dong, LIU Xun
Computer Science. 2021, 48 (8): 134-138.  doi:10.11896/jsjkx.200600140
To improve the real-time performance of UAV aerial image registration, this paper analyzes the relative stability of the UAV's altitude and the lack of high-frequency details in the images, proposes an improved SIFT feature point extraction algorithm, and constructs a dedicated aerial image dataset for image mosaicking for experimental verification. The paper first analyzes the theoretical basis and implementation of the scale invariance of SIFT (Scale-Invariant Feature Transform), and proposes measures to eliminate redundant computation: reducing the number of octaves and levels of the Gaussian pyramid, and selecting the third level image in each octave to detect extreme points, so as to reduce the size of the difference-of-Gaussian scale space. Finally, comparative experiments against state-of-the-art image mosaicking methods are conducted on the dataset. The experimental results show that the proposed method can extract robust feature points, and the matching time is only 1/10 of that of the original SIFT, which provides technical support for real-time image mosaicking by UAVs.
Image Super-resolution Reconstruction Using Recursive Residual Network Based on Channel Attention
GUO Lin, LI Chen, CHEN Chen, ZHAO Rui, FAN Shi-lin, XU Xing-yu
Computer Science. 2021, 48 (8): 139-144.  doi:10.11896/jsjkx.200500150
In recent years, deep learning has been widely used in image super-resolution reconstruction. To solve the problems of inadequate feature extraction, loss of detail and gradient vanishing in deep-learning-based super-resolution methods, a deep recursive residual network model based on channel attention is proposed for single image super-resolution. The model constructs a simple recursive residual network structure from nested residual networks and skip connections to deepen the network and speed up convergence while avoiding network degradation and gradient problems. An attention mechanism is introduced into the feature extraction part to improve the discriminative learning ability of the network for more accurate and more effective extraction of deep residual features, which is combined with a subsequent reconstruction network with a parallel mapping structure to ensure accurate final reconstruction. Quantitative and qualitative assessments are performed on the benchmark datasets Set5, Set14, B100 and Urban100 at magnifications of 2, 3 and 4 by comparison with mainstream methods. Experimental results show that the objective indices of the proposed method increase significantly over the compared methods on all four test datasets. Compared with the interpolation method and the SRCNN algorithm, the average PSNR improves by 3.965 dB and 1.56 dB, 3.19 dB and 1.42 dB, and 2.79 dB and 1.32 dB at magnifications of 2, 3 and 4, respectively. Visual results show that the proposed method recovers image details better.
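The channel attention idea described above can be sketched in squeeze-and-excitation style: each channel's feature map is squeezed by global average pooling, passed through a gating function, and the resulting weight rescales that channel. The gating here is a plain sigmoid on the pooled value, a simplification of the usual two-layer gate; shapes and names are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2D list (H x W)."""
    out = []
    for channel in feature_maps:
        h, w = len(channel), len(channel[0])
        pooled = sum(sum(row) for row in channel) / (h * w)          # squeeze
        weight = sigmoid(pooled)                                     # excite
        out.append([[weight * v for v in row] for row in channel])   # rescale
    return out

x = [
    [[1.0, 1.0], [1.0, 1.0]],      # strongly activated channel -> weight > 0.5
    [[-1.0, -1.0], [-1.0, -1.0]],  # weakly activated channel  -> weight < 0.5
]
y = channel_attention(x)
```

The effect is that informative channels are amplified relative to uninformative ones, which is the "discriminative learning" the abstract attributes to the attention mechanism.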
Multi-Shared Attention with Global and Local Pathways for Video Question Answering
WANG Lei-quan, HOU Wen-yan, YUAN Shao-zu, ZHAO Xin, LIN Yao, WU Chun-lei
Computer Science. 2021, 48 (8): 145-149.  doi:10.11896/jsjkx.200800207
Abstract PDF(2588KB) ( 950 )   
Video question answering is a challenging task of significant importance toward visual understanding.However,current visual question answering (VQA) methods mainly focus on a single static image,which is distinct from the sequential visual data we face in the real world.In addition,due to the diversity of textual questions,the VideoQA task has to deal with various visual features to obtain the answers.This paper presents a multi-shared attention network by utilizing local and global frame-level visual information for video question answering (VideoQA).Specifically,a two-pathway model is proposed to capture the global and local frame-level features with different frame rates.The two pathways are fused together with the multi-shared attention by sharing the same attention function.Extensive experiments are conducted on the Tianchi VideoQA dataset to validate the effectiveness of the proposed method.
Binocular Image Segmentation Based on Graph Cuts Multi-feature Selection
JIN Hai-yan, PENG Jing, ZHOU Ting, XIAO Zhao-lin
Computer Science. 2021, 48 (8): 150-156.  doi:10.11896/jsjkx.200800221
Abstract PDF(2739KB) ( 710 )   
Binocular image segmentation is crucial for subsequent applications such as stereoscopic object synthesis and 3D reconstruction.Since binocular images contain scene depth information,it is difficult to obtain ideal segmentation results by applying monocular image segmentation methods to binocular images directly.At present,most binocular image segmentation methods use the depth feature of the binocular image as an additional channel for the color feature.Only the color feature and the depth feature are simply integrated,and the depth feature of the image cannot be fully utilized.Based on the multi-class Graph Cuts framework,this paper proposes an interactive binocular image segmentation method.Combining features such as color,depth and texture into a graph model can make full use of different feature information.At the same time,the feature space neighborhood system is introduced in the Graph Cuts framework,which enhances the relationship between the pixels in the foreground and background areas of the image,and improves the integrity of the segmentation target.Experimental results show that the proposed method improves the accuracy of binocular image segmentation results effectively.
Monocular Visual Odometer Based on Deep Learning SuperGlue Algorithm
LIU Shuai, RUI Ting, HU Yu-cheng, YANG Cheng-song, WANG Dong
Computer Science. 2021, 48 (8): 157-161.  doi:10.11896/jsjkx.200700134
Abstract PDF(2794KB) ( 1369 )   
Aiming at the problem that,in feature-point-based visual odometry,changes of illumination and view angle lead to unstable feature point extraction and thus affect the accuracy of camera pose estimation,a monocular visual odometry modeling method based on the deep learning SuperGlue matching algorithm is proposed.Firstly,feature points are obtained by the SuperPoint detector,and the resulting feature points are encoded to obtain vectors containing the coordinates and descriptors of the feature points.Then more representative descriptors are generated by an attentional GNN network,and the Sinkhorn algorithm is used to solve the optimal score assignment matrix.Finally,according to the optimal feature matching,the camera pose is recovered and then optimized by minimizing the reprojection error.Experiments show that,even without back-end optimization,the proposed algorithm is more robust to view angle and illumination changes than visual odometry based on ORB or SIFT,and its accuracy in terms of absolute trajectory error and relative pose error is greatly improved,which further verifies the feasibility and superiority of the deep learning based SuperGlue matching algorithm in visual SLAM.
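The Sinkhorn step mentioned above iteratively normalizes a score matrix toward a doubly stochastic assignment; a simplified sketch of that normalization (SuperGlue itself works in log space with an extra "dustbin" row and column for unmatched points — this is only the core idea, not the paper's implementation):

```python
import numpy as np

def sinkhorn_normalize(scores, n_iters=200):
    """Turn a square score matrix into an (approximately) doubly
    stochastic matrix by alternating row/column normalization of exp(scores)."""
    p = np.exp(scores - scores.max())      # stabilized exponentiation
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)  # rows sum to 1
        p /= p.sum(axis=0, keepdims=True)  # columns sum to 1
    return p
```

The resulting matrix can be read as soft feature-to-feature match probabilities.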
Remote Sensing Image Semantic Segmentation Method Based on U-Net Feature Fusion Optimization Strategy
WANG Shi-yun, YANG Fan
Computer Science. 2021, 48 (8): 162-168.  doi:10.11896/jsjkx.200700182
Abstract PDF(2839KB) ( 879 )   
Due to the high spatial resolution of high-resolution remote sensing images,rich ground object information,high complexity,uneven distribution of target categories and different sizes of various ground objects,it is difficult to improve the segmentation accuracy.In order to improve the semantic segmentation accuracy of remote sensing images and solve the problem that the U-Net model is limited when combining deep semantic information and shallow position information,a semantic segmentation method of remote sensing images based on a U-Net feature fusion optimization strategy is proposed.This method adopts the encoder-decoder structure of the U-Net network.In the feature extraction part of the network,the encoder structure of the U-Net model is used to extract the feature information of multiple layers.In the feature fusion part,the skip connection structure of U-Net is retained,and at the same time,the feature fusion optimization strategy proposed in this paper is used to realize the fusion-optimization-refusion of high-level semantic features and low-level location features.In addition,the feature fusion optimization strategy uses dilated convolution to obtain more global features,and uses a sub-pixel convolutional layer instead of traditional transposed convolution to achieve adaptive upsampling.This method is validated on the Potsdam and Vaihingen datasets of ISPRS.The three evaluation indexes,overall classification accuracy,Kappa coefficient and mIoU,are 86.2%,0.82 and 0.77 on the Potsdam dataset,and 84.5%,0.79 and 0.69 on the Vaihingen dataset.Compared with the traditional U-Net model,the three evaluation indicators increase by 5.8%,8% and 8% on the Potsdam dataset,and 3.5%,4% and 11% on the Vaihingen dataset.Experimental results show that the remote sensing image semantic segmentation method based on the U-Net feature fusion optimization strategy achieves good semantic segmentation effects on both the Potsdam and Vaihingen datasets and can improve the accuracy of semantic segmentation of remote sensing images.
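The sub-pixel convolutional layer mentioned above upsamples by rearranging channels into space (pixel shuffle); a NumPy sketch of that rearrangement for a single feature map:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r),
    the upsampling step behind sub-pixel convolution."""
    crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

A preceding convolution expands the channel count by a factor of r*r; this rearrangement then yields the r-times larger output without the explicit zero-insertion of transposed convolution.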
Accurate Segmentation Method of Aerial Photography Buildings Based on Deep Convolutional Residual Network
XU Hua-jie, ZHANG Chen-qiang, SU Guo-shao
Computer Science. 2021, 48 (8): 169-174.  doi:10.11896/jsjkx.200500096
Abstract PDF(2818KB) ( 875 )   
In order to solve the problems of the high cost of obtaining the top plan view of the main outline of a building in 3D modeling scenarios,the low segmentation accuracy of aerial photography buildings,and interference on building roofs,a method for accurately segmenting aerial photography buildings based on a deep residual network is proposed,in which the positions of five points are expressed as heat maps serving as additional input channels of the network,and a good segmentation effect is achieved in the task of accurately segmenting aerial photography buildings.Experimental results show that the proposed method has higher segmentation accuracy and segmentation efficiency than the traditional semi-automatic segmentation method GrabCut,and better robustness and anti-interference than the DEXTR method.This method can provide high-precision top-view contour maps and top-view pictures of buildings for 3D reconstruction of buildings,and can also be used in the production process of aerial photography building datasets as an accurate and effective mask annotation tool or semi-automatic contour annotation tool to improve the annotation efficiency of datasets.
Stereo Track Blocks Coding System with Rotational Invariance
ZHOU Jia-li, FENG Yuan-yuan, WU Min, WU Chao
Computer Science. 2021, 48 (8): 175-184.  doi:10.11896/jsjkx.200400064
Abstract PDF(3432KB) ( 624 )   
Because the purpose and object of coding problems differ,the coding scheme needs to be adjusted according to different problems.For the coding problem of track blocks,a method of representing them by a two-dimensional function is proposed,and track blocks are recognized by phase correlation.Firstly,a track block is expanded in the two-dimensional polar coordinate system and expressed as a two-dimensional discrete function.Due to the rotational invariance of the track block,the representation of track blocks is not unique,and a parameter matrix is introduced to specify a normal representation.Secondly,the phase correlation algorithm is used to measure the similarity of two track blocks.Finally,according to the basic tracks in the block and their relative positions,the track block is compressed and encoded from its two-dimensional discrete function representation.Experiments show that the proposed method better expresses the internal spatial structure and rotational invariance and is more extendable than traditional coding methods,and the resulting coding and matching solution is more adaptable for building and optimizing track blocks.
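Phase correlation,the similarity measure used above,compares two 2-D discrete functions through the normalized cross-power spectrum; a generic FFT sketch (illustrative only, not the paper's exact formulation):

```python
import numpy as np

def phase_correlation(a, b):
    """Locate the cyclic shift between two 2-D signals: the peak of the
    inverse FFT of the normalized cross-power spectrum gives the offset,
    and the peak height measures how well the signals match."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12  # keep phase information only
    corr = np.fft.ifft2(cross).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    return shift, corr.max()
```

Because a rotation of a track block becomes a cyclic shift after the polar-coordinate expansion described above,the peak height is insensitive to rotation.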
Multi-band Image Self-supervised Fusion Method Based on Multi-discriminator
TIAN Song-wang, LIN Su-zhen, YANG Bo
Computer Science. 2021, 48 (8): 185-190.  doi:10.11896/jsjkx.200600132
Abstract PDF(3955KB) ( 664 )   
In order to solve the problem that fusion results are limited by over-dependence on label images when deep learning methods are used in the multi-band image fusion field,a multi-band image feature-level self-supervised fusion method based on a multi-discriminator generative adversarial network is proposed.Firstly,this paper designs and builds a feedback dense network as a feature enhancement module to separately extract multi-band image features and perform feature enhancement.Secondly,it merges and connects the multi-band image feature enhancement results and reconstructs the fused image through the designed feature fusion module.Finally,the preliminary fused result and the source images of each band are input into the discriminator network respectively.Through the classification task of multiple discriminators,the generator is continuously optimized so that its output retains the characteristics of multiple band images at the same time,achieving the purpose of image fusion.Experimental results show that,compared with current representative fusion methods,the proposed method achieves better clarity and information volume,preserves more detailed information,and is more in line with human visual characteristics.
Image Retrieval Method Based on Fuzzy Color Features and Fuzzy Similarity
WANG Chun-jing, LIU Li, TAN Yan-yan, ZHANG Hua-xiang
Computer Science. 2021, 48 (8): 191-199.  doi:10.11896/jsjkx.200800202
Abstract PDF(3894KB) ( 668 )   
The performance of a content-based image retrieval (CBIR) system mainly depends on two key technologies:image feature extraction and image feature matching.In this paper,the color features of all the images are extracted,and an appropriate fuzzy algorithm is adopted in the process of color feature extraction to obtain the fuzzy color features of images.Image feature matching mainly depends on the similarity between two image feature vectors.In this paper,a novel fuzzy similarity measure method is proposed.It adopts the similarities between the query image and its k nearest neighbor images to constitute the k-dimensional fuzzy feature vector of the query image,and adopts the similarities between each retrieved image and the k nearest neighbor images of the query image to constitute the k-dimensional fuzzy feature vector of each retrieved image.Then it calculates the fuzzy similarity between the k-dimensional fuzzy feature vector of the query image and that of each retrieved image,and the retrieved images are fed back to users in descending order of fuzzy similarity.In order to verify the effectiveness of the proposed fuzzy color features,a series of experimental comparisons are performed on the WANG dataset.In order to evaluate the performance of the image retrieval system based on different similarities,a series of experimental comparisons are performed on the WANG,Corel-5k and Corel-10k datasets.Experimental results show that the image retrieval system based on the maximum and minimum values outperforms the image retrieval systems based on the other three commonly used similarities,and the image retrieval system based on fuzzy similarity in turn outperforms that based on the maximum and minimum values.On the WANG,Corel-5k and Corel-10k datasets,the average precision of the top 20 images retrieved by the system based on fuzzy similarity is 4.92%,17.11% and 19.48% higher than that of the top 20 images retrieved by the system based on the maximum and minimum values respectively,and the average precision of the top 100 images retrieved by the system based on fuzzy similarity is 4.94%,22.61% and 33.02% higher respectively.
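The "maximum and minimum values" similarity used as the baseline above is commonly defined as the ratio of summed element-wise minima to summed element-wise maxima of two feature vectors; a sketch under that assumption (the paper's exact definition may differ):

```python
import numpy as np

def max_min_similarity(u, v):
    """Similarity in [0, 1]: sum of element-wise minima over
    sum of element-wise maxima of two non-negative feature vectors."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()
```

Identical vectors score 1,disjoint (non-overlapping) vectors score 0.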
Artificial Intelligence
Overview of Speech Synthesis and Voice Conversion Technology Based on Deep Learning
PAN Xiao-qin, LU Tian-liang, DU Yan-hui, TONG Xin
Computer Science. 2021, 48 (8): 200-208.  doi:10.11896/jsjkx.200500148
Abstract PDF(2629KB) ( 3139 )   
Voice information processing technology is developing rapidly under the impetus of deep learning.The combination of speech synthesis and voice conversion technology can achieve real-time high-fidelity voice output of designated objects and content,and has broad application prospects in human-machine interaction,pan-entertainment and other fields.This paper provides an overview of speech synthesis and voice conversion technology based on deep learning.First,it briefly reviews the development of speech synthesis and voice conversion technology.Next,it enumerates the common public datasets in these fields for the convenience of researchers carrying out related explorations.It then discusses TTS models,including classic and cutting-edge models and algorithms in terms of style,rhythm and speed,and compares their effects and development potential.After that,it reviews voice conversion by summarizing voice conversion methods and their optimization.Finally,it summarizes the applications and challenges of speech synthesis and voice conversion,and,based on the problems they face in terms of models,applications and regulation,looks forward to their future development in model compression,few-shot learning and forgery detection.
Survey for Performance Measure Index of Classification Learning Algorithm
YANG Xing-li
Computer Science. 2021, 48 (8): 209-219.  doi:10.11896/jsjkx.200900216
Abstract PDF(2253KB) ( 1205 )   
In research on classification tasks in machine learning,it is important to correctly evaluate the performance of learning algorithms.In practical applications,many performance measure indexes have been proposed from different perspectives.Three kinds of performance measure indexes,based on error rate,confusion matrix and statistical test,are introduced in this paper.The background,significance and scope of each measure index are discussed,and the differences between methods are analyzed.Future research problems and directions are also put forward and analyzed.Furthermore,the differences between these performance measure indexes are compared longitudinally and horizontally with experimental data,and their consistency in classification algorithm selection is analyzed.
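Among the confusion-matrix-based indexes surveyed above,the most common ones for a binary classifier can be sketched as follows (error rate is simply 1 minus accuracy):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion-matrix
    counts; assumes all denominators are non-zero."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```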
DragDL:An Easy-to-Use Graphical DL Model Construction System
TANG Shi-zheng, ZHANG Yan-feng
Computer Science. 2021, 48 (8): 220-225.  doi:10.11896/jsjkx.200900045
Abstract PDF(2913KB) ( 1271 )   
Deep learning has broad applications in various fields.However,users still face problems from two aspects when applying deep learning.First,deep learning has a complex theoretical background,and non-professional users lack background knowledge in modeling and tuning,so it is difficult for them to build performance-optimized models.Second,modules such as data preprocessing,model training and prediction often involve complicated programming,which makes it difficult for non-professional users without a programming background to get started.In view of these two usability issues,this paper proposes an easy-to-use graphical deep learning model construction system,DragDL,whose purpose is to reduce the difficulty of data preprocessing,model training,monitoring,online prediction and other tasks for users.The system is based on the PaddlePaddle framework and supports building a deep learning network structure on a canvas by dragging graphical operators,supports inference and prediction functions,and abstracts the data preprocessing process into a dataflow graph,which is convenient for users to understand and debug.The system also provides visualization functions for performance monitoring during training.At the same time,DragDL provides a classic model library,which allows users to build new DL networks by tuning existing classic model networks.DragDL is deployed with a centralized server and Web clients.The server provides a virtual machine service for submitted tasks and supports large-scale asynchronous task scheduling for concurrent processing.
Fine-grained Sentiment Analysis Based on Combination of Attention and Gated Mechanism
ZHANG Jin, DUAN Li-guo, LI Ai-ping, HAO Xiao-yan
Computer Science. 2021, 48 (8): 226-233.  doi:10.11896/jsjkx.200700058
Abstract PDF(2623KB) ( 1393 )   
Fine-grained sentiment analysis is one of the key problems in the area of natural language processing.By learning contextual information of the text to conduct sentiment analysis on specific aspects,it can help users and businesses to better understand the sentiment information of specific aspects in users' comments.Aiming at the task of fine-grained sentiment analysis of users' comments,a text sentiment classification model combining BiGRU-attention and a gated mechanism is proposed.By integrating existing sentiment resources,the HowNet evaluation sentiment dictionary is used as the seed sentiment dictionary,the user comment sentiment dictionary is expanded through the SO-PMI algorithm,and the negation dictionary and part-of-speech information are combined to expand the user comment sentiment knowledge as sentiment characteristic information of users' comments.Word,character and sentiment characteristics are introduced as the input information of the model,BiGRU is used to extract deep text features,and then,combined with the gated mechanism and the attention mechanism,the contextual sentiment characteristics related to aspect words are further extracted according to the acquired aspect word information;the final sentiment polarity is obtained by the softmax classifier.Experimental results show that the proposed model achieves better results on the AI Challenger 2018 fine-grained sentiment analysis Chinese datasets,with a Macro_F1 score of 0.7218,exceeding the baseline system.
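The SO-PMI expansion described above scores a candidate word by its pointwise mutual information with positive versus negative seed words; a simplified count-based sketch (the seed lists, counts and smoothing here are illustrative, not the paper's configuration):

```python
import math

def so_pmi(word, pos_seeds, neg_seeds, cooc, counts, total):
    """SO-PMI(w) = sum over positive seeds of PMI(w, s)
    minus sum over negative seeds of PMI(w, s);
    a positive score suggests positive polarity."""
    def pmi(w, s):
        joint = cooc.get(frozenset((w, s)), 0)
        if joint == 0:
            return 0.0  # unseen pair contributes nothing
        return math.log2(joint * total / (counts[w] * counts[s]))
    return (sum(pmi(word, s) for s in pos_seeds)
            - sum(pmi(word, s) for s in neg_seeds))
```

Candidate words whose score exceeds a threshold are added to the positive (or,if negative,the negative) side of the expanded dictionary.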
Compound Conversation Model Combining Retrieval and Generation
YANG Hui-min, MA Ting-huai
Computer Science. 2021, 48 (8): 234-239.  doi:10.11896/jsjkx.200700162
Abstract PDF(1427KB) ( 1001 )   
The conversation model is one of the important directions of natural language processing.Today's dialogue models are mainly divided into retrieval-based methods and generation-based methods.However,the retrieval method cannot respond to questions that do not appear in the corpus,and the generation method is prone to the safe-response problem.In view of this,a compound conversation model that combines retrieval and generation is proposed,so that the two methods make up for each other's shortcomings.First,K retrieval contexts and the corresponding K retrieval candidate responses are obtained through the retrieval module.In the multi-response generation module,the retrieval contexts are further combined to obtain several generated candidate responses.The candidate response ranking module is divided into two steps:pre-screening and post-reranking.The pre-screening part obtains the optimal retrieval response and the optimal generated response by calculating the similarity between the input question and the candidate responses,and the post-reranking part further selects the answer that best fits the input question.Experimental results show that the BLEU index increases by 6% and the diversity index increases by 12%.
Identifying Essential Proteins by Hybrid Deep Learning Model
LIU Wen-yang, GUO Yan-bu, LI Wei-hua
Computer Science. 2021, 48 (8): 240-245.  doi:10.11896/jsjkx.200700076
Abstract PDF(2119KB) ( 912 )   
Essential proteins are those proteins that are essential to the viability of an organism.The identification of essential proteins helps to understand the minimum requirements of cell life and to discover disease-causing genes and drug targets,and is of great significance for the diagnosis and treatment of diseases and for drug design.Existing methods show that integrating protein interaction networks and relevant sequence features can improve the accuracy and robustness of essential protein identification.In this paper,gene expression profiles,protein interaction networks and subcellular location information are integrated,and a hybrid neural network model,IEPHDL,is designed.The IEPHDL model is the first to use a bidirectional gated recurrent unit to perform feature learning on gene expression profiles,and uses a deep neural network composed of multiple fully connected layers to perform deep relearning of the three data features,giving full play to the advantages of the bidirectional gated recurrent unit network,fully connected network and Node2vec in feature learning and representation,so as to achieve effective identification of essential proteins.Experimental results show that IEPHDL achieves an accuracy of 88.7%,a precision of 86.2% and an AUC of 85.2% for essential protein identification.Its accuracy is 13%,8.9% and 3.8% higher than that of the current optimal centrality method,machine learning method and deep learning method in turn,and its other indicators are also higher than those of the three methods.Finally,experimental analysis confirms that the bidirectional gated recurrent unit network,relying on its strong feature learning ability,plays a key role in essential protein identification.
Intelligent Assignment and Positioning Algorithm of Moving Target Based on Fuzzy Neural Network
QU Li-cheng, LYU Jiao, QU Yi-hua, WANG Hai-fei
Computer Science. 2021, 48 (8): 246-252.  doi:10.11896/jsjkx.200600050
Abstract PDF(3380KB) ( 677 )   
In order to solve the problems of limited monitoring range,unreasonable allocation of monitoring resources and untimely detection of moving targets in intelligent video surveillance systems under special application scenarios,this paper exploits the fact that the electromagnetic waves used in radar detection have strong penetration ability and a large search range and are not subject to special weather or optical conditions.Combined with the flexibility and maneuverability of unmanned aerial vehicles and automatic navigation vehicles,this paper proposes a radar-directed integrated linkage video surveillance model,and on this basis studies a unified coordinate positioning system based on geodetic coordinates and an intelligent assignment and positioning algorithm for moving targets based on a fuzzy neural network optimized by particle swarm optimization.The algorithm can automatically solve the control parameters of each camera in the three dimensions of horizontal,vertical and zoom according to the radar detection signal,and combines the linkage control system to achieve real-time positioning and tracking of moving targets.In field tests at a cultural relics protection site,the target positioning accuracy of the geodetic positioning system reaches 99.6%,and the accuracy rate of the intelligent assignment algorithm for moving targets based on the fuzzy neural network reaches 95%;the system can achieve precise positioning and intelligent allocation of monitoring resources,and has high practical application value.
Multi-agent System Based on Stackelberg and Edge Laplace Matrix
ZHANG Jie, YUE Shao-hua, WANG Gang, LIU Jia-yi, YAO Xiao-qiang
Computer Science. 2021, 48 (8): 253-262.  doi:10.11896/jsjkx.200700032
Abstract PDF(1670KB) ( 868 )   
Aiming at the problems of low efficiency,unresolved local conflicts and lack of practical application scenarios in interaction models of multi-agent systems in distributed environments,this paper designs a multi-leader multi-follower interaction model for multi-agent systems based on the Stackelberg game,which is applied to the interaction game between controllers and participants in the command and control process.Firstly,through the optimization of the Stackelberg game model and the multi-agent system of the multi-leader-follower Stackelberg game designed by multi-attribute decision-making,the closed-loop solution problem of the Stackelberg game is solved by introducing a regular Riccati equation,which exploits the optimization regularity of the semi-positive definite quadratic performance index.Then,based on graph theory,a multi-agent system model based on the edge Laplace matrix is established to reduce the difficulty of solving complex problems.Finally,numerical simulation and experimental analysis verify the efficiency and strong robustness of the model from many aspects,and prove that the proposed model is effective.
Information Security
False Information in Social Networks:Definition,Detection and Control
WANG Jian, WANG Yu-cui, HUANG Meng-jie
Computer Science. 2021, 48 (8): 263-277.  doi:10.11896/jsjkx.210300053
Abstract PDF(1623KB) ( 3079 )   
In recent years,the spread of false information on social networks has become increasingly fierce,causing serious social impact in political,economic,psychological and other aspects.Effective detection and control of false information in social networks is an important means to improve the quality of the social network ecosystem and create a safe and credible network environment for people.This paper surveys representative research in the field of social network false information at home and abroad in recent years,sorts out and gives the definitions,characteristics and communication models of false news and rumors,and then introduces current means and methods for the detection and communication control of false information.Finally,this paper summarizes and analyzes the existing problems of detection and control methods,and further discusses future research directions in this field.
Correlation Analysis for Key-Value Data with Local Differential Privacy
SUN Lin, PING Guo-lou, YE Xiao-jun
Computer Science. 2021, 48 (8): 278-283.  doi:10.11896/jsjkx.201200122
Abstract PDF(1897KB) ( 938 )   
Crowdsourced data from distributed sources are routinely collected and analyzed to produce effective data-mining models in crowdsensing systems.Such data usually contain personal information,which leads to possible privacy leakage in data collection and analysis.Local differential privacy (LDP) has been deemed the de facto measure for trading off privacy guarantees against data utility.Key-value data is a heterogeneous data type in which the key is categorical and the value is numerical,and achieving LDP for key-value data is challenging.This paper focuses on key-value data publishing and correlation analysis under the framework of LDP.Firstly,the frequency correlation and mean correlation in key-value data are defined.Then an indexing one-hot perturbation mechanism is proposed to provide LDP guarantees.Finally,the correlation results can be estimated in the perturbed space.Theoretical analysis and experimental results on both real-world and synthetic datasets validate the effectiveness of the proposed mechanism.
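The paper's indexing one-hot mechanism is not reproduced here,but the flavor of LDP perturbation for the categorical key can be illustrated with generalized randomized response and its unbiased frequency estimator:

```python
import math
import random

def grr_perturb(value, domain, epsilon, rng):
    """Report the true category with probability e^eps / (e^eps + k - 1),
    otherwise a uniformly chosen other category (epsilon-LDP)."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, category, domain, epsilon):
    """Unbiased estimate of the true frequency of `category`
    from the perturbed reports."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1.0 - p) / (k - 1)
    observed = sum(r == category for r in reports) / len(reports)
    return (observed - q) / (p - q)
```

The aggregator never sees raw keys,yet frequencies (and hence frequency correlations) remain estimable in the perturbed space.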
FAWA:A Negative Feedback Dynamic Scheduling Algorithm for Heterogeneous Executor
YANG Lin, WANG Yong-jie, ZHANG Jun
Computer Science. 2021, 48 (8): 284-290.  doi:10.11896/jsjkx.200900059
Abstract PDF(3054KB) ( 638 )   
As a new cyber defense method,mimic defense has an excellent defense effect due to its unpredictable characteristics.Heterogeneous executors are heterogeneous components composed of various defense strategies for mimic defense,and the mimic defense mechanism obtains dynamics of defense through the dynamic scheduling of heterogeneous executors.Traditional scheduling methods have certain limitations.In view of these limitations,and comprehensively considering the comprehensiveness of defense and historical defense success rate information,a new dynamic scheduling algorithm with negative feedback capability,FAWA,is proposed,and simulated attack-defense collision experiments are designed to compare its defense effect with that of other scheduling methods.The experimental results show that in the scenario where the attacker loads the attack load randomly,the scheduling effect of the FAWA algorithm is always better than that of the other algorithms,which can well improve the defense success rate.In the scenario where the attacker also adopts negative feedback loading,the scheduling effect of the FAWA algorithm is better than that of the CRA algorithm and some improved dynamic artificial weighting algorithms,but weaker than FIFO.In addition,the simulation experiments compare the two attack-load loading scenarios and find that the defender's defense success rate is lower in the random loading scenario,indicating that the attacker's random strategy outperforms the negative feedback loading strategy.This conclusion shows that in the network attack-defense game the attack also needs randomness and unpredictability,and should not be excessively interfered with and adjusted.
Incomplete Information Game Theoretic Analysis to Defend Fingerprinting
LI Shao-hui, ZHANG Guo-min, SONG Li-hua, WANG Xiu-lei
Computer Science. 2021, 48 (8): 291-299.  doi:10.11896/jsjkx.210100148
Abstract PDF(1843KB) ( 929 )   
Fingerprinting,an important part of reconnaissance,the first stage of the network attack kill chain,is a prerequisite for the successful implementation of a network attack.The promotion of the concept of active defense,especially deception defense,encourages defenders to confuse attackers by means of fingerprint information hiding and obfuscation,thus reducing the effectiveness of their network reconnaissance.In this way,defenders can obtain a certain first-mover advantage in the confrontation,and the confrontation between the two sides is advanced to the reconnaissance stage.Deception is a strategic confrontation between the rational agents of both sides,and game theory is a quantitative science that studies conflict and cooperation between rational decision-making players.It can model the players and actions of various defensive deceptions and guide defenders to make better use of deception technology.In this paper,a dynamic game model with incomplete information is used to analyze the interactive process from reconnaissance to attack.The possible perfect Bayesian Nash equilibria are analyzed and calculated,and the equilibria are discussed for different scenarios.Suggestions are put forward for defenders to optimize their deceptive strategy to achieve a better anti-fingerprinting effect.
Differentially Private Location Privacy-preserving Scheme with Semantic Location
ZHANG Xue-jun, YANG Hao-ying, LI Zhen, HE Fu-cun, GAI Ji-yang, BAO Jun-da
Computer Science. 2021, 48 (8): 300-308.  doi:10.11896/jsjkx.200900198
Abstract PDF(2549KB) ( 877 )   
References | Related Articles | Metrics
How to add noise more reasonably in location differential privacy preservation is a hot issue.Adding the same amount of noise at different locations results in decreased service availability and privacy preservation.To this end,a differentially private location privacy-preserving scheme with semantic location is proposed in this paper,which systematically resolves the tension among privacy preservation,service availability and time overhead.The proposed method first constructs the expected distance by employing the framework of geo-indistinguishability,then determines the sensitivity of different locations by using the privacy quality function and requirement function,and finally adds Laplace noise to different types of region at fine granularity according to the location sensitivity.Comprehensive simulation experiments on two public datasets compare the proposed scheme with existing methods in terms of query success rate under Bayesian attack,service availability based on expected-distance quantization,and time overhead.The experimental results demonstrate that the proposed scheme is feasible and effective,and obtains a better trade-off among privacy preservation,service availability and time consumption.
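The fine-grained, semantics-aware noise addition can be sketched as below. The region types, sensitivity values and the coordinate-wise Laplace noise are illustrative simplifications: the paper's scheme derives sensitivity from privacy quality and requirement functions and works within geo-indistinguishability, which strictly uses a planar Laplace mechanism rather than independent per-axis noise:

```python
import numpy as np

# Hypothetical sensitivity per semantic region type: more sensitive
# places (e.g. hospitals) receive a larger scale, hence more noise.
SENSITIVITY = {"hospital": 3.0, "home": 2.0, "mall": 1.0, "road": 0.5}

def perturb(location, region_type, epsilon, rng):
    """Simplified sketch of semantics-aware location perturbation:
    Laplace noise on each coordinate with scale sensitivity / epsilon."""
    scale = SENSITIVITY[region_type] / epsilon
    return np.asarray(location, dtype=float) + rng.laplace(0.0, scale, size=2)
```

With a fixed privacy budget epsilon, a query from a hospital is perturbed six times more strongly than one from a road, which is the fine-granularity behavior the abstract describes.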
Human-Machine Interaction
SSVEP Stimulus Number Effect on Performance of Brain-Computer Interfaces in Augmented Reality Glasses
DU Yu-lin, HUANG Zhang-rui, ZHAO Xin-can, LIU Chen-yang
Computer Science. 2021, 48 (8): 309-314.  doi:10.11896/jsjkx.200700219
Abstract PDF(3212KB) ( 719 )   
References | Related Articles | Metrics
Steady-state visual evoked potential-based brain-computer interfaces (SSVEP-BCI) can map visual stimuli of different frequencies to specific commands to control external devices.In order to explore the capacity of AR-BCI with respect to the number of stimuli,and the influence of the number of multi-target stimuli on its classification accuracy,4 stimulus layouts with different numbers of targets are designed in this study and displayed through HoloLens (AR) glasses.Comparative analysis shows that the classification accuracy of the 4 layouts gradually decreases as the number of stimuli increases,and that the location of the stimuli also affects classification accuracy.Under a similar experimental paradigm,the classification results of the PC screen and AR display terminals are compared,and it is found that an increasing number of stimuli has a great impact on the classification performance of AR-BCI.The current study indicates that the number of stimuli is a key factor affecting the construction of SSVEP-BCIs in AR environments.
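The frequency-tagging principle behind SSVEP classification can be illustrated with a toy spectral-peak detector. Real SSVEP-BCIs typically use canonical correlation analysis or similar methods; this sketch only shows how a flicker frequency is recovered from a recorded signal:

```python
import numpy as np

def detect_ssvep_freq(signal, fs, candidate_freqs):
    """Toy sketch: return the candidate stimulus frequency whose
    spectral magnitude is largest in the recorded signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Magnitude at the FFT bin nearest each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]
```

Each on-screen target flickers at its own frequency, so identifying the dominant frequency in the EEG identifies the target the user is attending to.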
Prediction and Assistance of Navigation Demand Based on Eye Tracking in Virtual Reality Environment
ZHU Chen-shuang, CHENG Shi-wei
Computer Science. 2021, 48 (8): 315-321.  doi:10.11896/jsjkx.200500031
Abstract PDF(2854KB) ( 626 )   
References | Related Articles | Metrics
In order to solve the problems of insufficient user support and low user immersion of traditional navigation methods in complex virtual reality scenes,this paper proposes a binary classification model based on the gradient boosting decision tree,which uses eye-movement data collected before and after the user needs auxiliary navigation in the VR environment to predict whether the user needs navigation during a task.The model is evaluated on the user's gaze sequences,and the average precision and accuracy of the user-demand judgment method are 77.6% and 77.2%,respectively.In addition,we implement a navigation-aid prototype system.By classifying the user's navigation requirement based on eye-movement data,the user interface of the prototype system can automatically present a map for navigation.Experimental results show that,compared with the traditional permanently displayed assisted navigation method,the adaptive assisted navigation proposed in this paper provides a better user experience.
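The paper feeds gaze-sequence features to a gradient boosting classifier; the feature-extraction step can be sketched as below. The specific features (`mean_fix_dur`, `mean_saccade_amp`, `fix_count`) and the `(x, y, duration_ms)` fixation format are illustrative assumptions, not the paper's exact feature set:

```python
def gaze_features(fixations):
    """Hypothetical feature vector from a gaze sequence, where
    `fixations` is a list of (x, y, duration_ms) tuples.  Such window
    features would be the input to a GBDT navigation-demand classifier."""
    durations = [d for _, _, d in fixations]
    # Saccade amplitude: Euclidean distance between consecutive fixations.
    amps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])]
    return {
        "mean_fix_dur": sum(durations) / len(durations),
        "mean_saccade_amp": sum(amps) / len(amps) if amps else 0.0,
        "fix_count": len(fixations),
    }
```

Intuitively, long scattered saccades with short fixations suggest the user is searching and may need the navigation map.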
CSI Cross-domain Gesture Recognition Method Based on 3D Convolutional Neural Network
Computer Science. 2021, 48 (8): 322-327.  doi:10.11896/jsjkx.200600122
Abstract PDF(2701KB) ( 1290 )   
References | Related Articles | Metrics
Gesture recognition has important application prospects in human-computer interaction.In recent years,with the rapid development of wireless communication and the Internet of Things,WiFi devices have been deployed almost everywhere,and a large number of gesture recognition methods based on WiFi channel state information (CSI) have appeared.At present,most CSI-based gesture recognition research focuses only on recognition in a known domain;for an unknown domain,new data from unseen scenes must be added for additional training,otherwise recognition accuracy drops sharply,limiting practicality.To address this problem,a CSI cross-domain gesture recognition method based on a 3D convolutional neural network is proposed.The method realizes cross-scene gesture recognition by extracting domain-independent features and combining them with a 3D convolutional neural network learning model.To verify the method,experiments use a public dataset with 6 different gestures.The results show that the method achieves 86.50% recognition accuracy in the known domain and 84.67% in unknown scenes,demonstrating that it can achieve cross-scene gesture recognition.
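The basic operation of a 3D CNN over a CSI tensor can be illustrated with a single "valid" 3D convolution (cross-correlation, as deep-learning frameworks implement it). The `(time, height, width)` layout is an assumption for illustration; a real network stacks many such kernels with nonlinearities:

```python
import numpy as np

def conv3d_valid(x, k):
    """Minimal 'valid'-mode 3D cross-correlation over a CSI tensor,
    showing how a 3D kernel extracts joint spatio-temporal features."""
    T, H, W = x.shape
    t, h, w = k.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                # Dot product of the kernel with one sliding sub-volume.
                out[i, j, l] = np.sum(x[i:i + t, j:j + h, l:l + w] * k)
    return out
```

Because the kernel spans the time axis as well as the two spatial axes, it responds to motion patterns in the CSI stream rather than to static snapshots, which is what makes 3D CNNs suitable for gesture dynamics.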
Real-time LSTM-based Multi-dimensional Features Gesture Recognition
LIU Liang, PU Hao-yang
Computer Science. 2021, 48 (8): 328-333.  doi:10.11896/jsjkx.210300079
Abstract PDF(1763KB) ( 1392 )   
References | Related Articles | Metrics
Gesture recognition is widely used in the field of sensing.There are three kinds of gesture recognition methods,based on computer vision,depth sensors and motion sensors respectively.Recognition based on motion sensors has the advantages of less input data,high speed,and direct acquisition of 3D hand information,and has gradually become a research hotspot.Traditional motion-sensor-based gesture recognition is essentially a pattern recognition problem,and its accuracy depends heavily on feature sets extracted from prior experience.Unlike traditional pattern recognition methods,deep learning can greatly reduce the workload of hand-crafted feature extraction.To solve this problem of traditional pattern recognition,this paper proposes a real-time multi-dimensional feature recognition method based on Long Short-Term Memory (LSTM),and the performance of the method is verified by sufficient experiments.The method first defines a gesture library consisting of five basic gestures and seven complex gestures.Based on the kinematic features of hand posture,angle features and displacement features are extracted,and the frequency-domain features of the sensor data are then extracted by the short-time Fourier transform (STFT).The three kinds of features are then fed into the deep neural network LSTM for training,so that the collected gestures are classified and recognized.Meanwhile,to verify the effectiveness of the proposed method,the gesture data of six volunteers are collected as the experimental dataset with a self-designed hand-held stick.The experimental results show that the recognition accuracy of the proposed method reaches 94.38% for basic and complex gestures,nearly 2% higher than that of the traditional support vector machine,K-nearest neighbor method and fully connected neural network.
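The STFT feature-extraction step mentioned above can be sketched as follows. The frame length, hop size and Hann window are illustrative assumptions; the paper does not specify its STFT parameters:

```python
import numpy as np

def stft_mag(signal, frame_len=64, hop=32):
    """Sketch of short-time Fourier transform features: slide a Hann
    window over the motion-sensor stream and take the FFT magnitude
    of each frame, yielding a (frames, bins) time-frequency matrix."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))
```

Each row of the result is one time step of frequency-domain features, which can be concatenated with the angle and displacement features before being fed to the LSTM.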
Interactive Group Discovery Based on Skeleton Trajectory Aggregation Model in Class Environment
GAO Yan, YAN Qiu-yan, XIA Shi-xiong, ZHANG Zi-han
Computer Science. 2021, 48 (8): 334-339.  doi:10.11896/jsjkx.201000036
Abstract PDF(2403KB) ( 797 )   
References | Related Articles | Metrics
Traditional class action recognition methods focus on recognizing the interactive behavior itself rather than discovering the groups involved.Accurately locating and discovering interactive groups in a class environment is the basis for further individual behavior recognition,but occlusion causes missing behavior data.Skeleton data represent human behavior and motion trajectories,and have the advantages of being insensitive to illumination and background and of a simple data representation.Aiming at multi-person interactive group discovery from skeleton data,an interactive group discovery algorithm based on skeleton trajectory aggregation (IGSTA) is proposed.Firstly,the skeleton data are standardized into a human-centered coordinate system to reduce the impact of different body sizes and initial positions on recognition accuracy.Secondly,a skeleton trajectory aggregation model based on multi-kernel representation is proposed to accurately describe the changes of students' interactive behavior groups.Finally,the aggregated skeleton trajectories are clustered to realize interactive group discovery.Kinect is used to obtain simulated videos of classroom student interaction behavior.Experiments prove the validity of the method:even with missing skeleton nodes,interactive groups of students can be accurately discovered in a class environment.
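The two steps of normalization and group clustering can be sketched as below. The hip-joint index, 2D coordinates and distance-threshold grouping are illustrative stand-ins: IGSTA itself uses multi-kernel trajectory aggregation before clustering:

```python
import numpy as np

def center_on_hip(skeleton, hip_idx=0):
    """Normalize a (joints, 2) skeleton into a human-centered frame by
    subtracting the hip joint, removing initial-position differences
    (joint index and 2D coordinates are assumptions for illustration)."""
    return skeleton - skeleton[hip_idx]

def group_by_distance(centroids, thresh):
    """Toy group discovery: students belong to one interactive group if
    their trajectory centroids fall within `thresh` of each other,
    merged transitively via union-find."""
    n = len(centroids)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centroids[i] - centroids[j]) < thresh:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Students whose aggregated trajectories stay close end up with the same group label, while a distant student forms a singleton group.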