Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 49 Issue 7, 15 July 2022
Contents
Computer Science. 2022, 49 (7): 0-0. 
Invited Article
Study on Development Status and Countermeasures of Industrial Intranet in Enterprises
WANG Xing-wei, XIN Jun-chang, SHAO An-lin, BI Yuan-guo, YI Xiu-shuang
Computer Science. 2022, 49 (7): 1-9.  doi:10.11896/jsjkx.210900029
The industrial Internet promotes the intelligent development of the entire manufacturing service system by closely connecting and integrating equipment, production lines, factories, suppliers, products and customers through an Internet platform. China attaches great importance to the development of the industrial Internet and has put forward the latest development goals in the field of industrial Internet technology. The intra-enterprise industrial Internet is an important part of the industrial Internet: it interconnects industrial equipment, information systems, business processes, enterprise products and services, and personnel, realizing efficient connections from the shop-floor level to the decision-making level. This paper first introduces the current development status of the intra-enterprise industrial Internet from the perspectives of its standards, devices and applications. Secondly, the problems in its development are analyzed in depth from seven aspects: architecture, network identification, operating system, real-time performance, reliability, compatibility and security. Then, measures for the development of the intra-enterprise industrial Internet are proposed, and its collaborative innovation mechanisms and policies are discussed. Finally, the future development of the intra-enterprise industrial Internet is summarized.
Database & Big Data & Data Science
Click-Through Rate Prediction Model Based on Neural Architecture Search
SHUAI Jian-bo, WANG Jin-ce, HUANG Fei-hu, PENG Jian
Computer Science. 2022, 49 (7): 10-17.  doi:10.11896/jsjkx.210600009
Click-through rate (CTR) prediction is an important task in recommendation systems. Its goal is to predict the probability of a user clicking on an advertisement or item. Feature embedding and feature interaction are critical for prediction performance, so many CTR prediction models are optimized along these two lines. However, most works focus on only one of the two aspects, and almost all models do not distinguish among features during interaction: the same embedding and interaction method are used when crossing a given feature with any other feature, which hinders further improvement of model performance. To solve this problem, the automatic super-field-aware feature embedding and interacting (Auto-SEI) model is proposed. It first assigns each sub-field to a super-field and obtains feature embeddings according to this grouping, then selects an appropriate interaction method for each feature pair to obtain cross features, and finally makes the prediction. In Auto-SEI, the division of sub-fields and the selection of interaction methods are parameterized as an architecture search problem, and a neural architecture search (NAS) algorithm is used to compress the search space and make the selection. Extensive experiments on three real large-scale datasets show the excellent performance of Auto-SEI on the CTR prediction task.
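The abstract does not detail Auto-SEI's concrete search space; the underlying NAS idea, making the choice among candidate interaction methods differentiable via learned architecture weights, can be sketched as follows (a minimal DARTS-style sketch; the class name, candidate operations and dimensions are illustrative assumptions, not the paper's actual design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedInteraction(nn.Module):
    """DARTS-style mixed operation: a softmax over architecture weights blends
    candidate interaction functions, so the choice becomes differentiable."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Linear(2 * dim, dim)
        self.ops = [
            lambda a, b: a * b,                                # Hadamard product
            lambda a, b: a + b,                                # element-wise sum
            lambda a, b: self.mlp(torch.cat([a, b], dim=-1)),  # learned crossing
        ]
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, e_i, e_j):                               # (batch, dim) embeddings
        w = F.softmax(self.alpha, dim=0)
        return sum(wk * op(e_i, e_j) for wk, op in zip(w, self.ops))
```

After search, the operation with the largest architecture weight would be kept for each feature pair.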
Fusion Algorithm for Matrix Completion Prediction Based on Probabilistic Meta-learning
QI Xiu-xiu, WANG Jia-hao, LI Wen-xiong, ZHOU Fan
Computer Science. 2022, 49 (7): 18-24.  doi:10.11896/jsjkx.210600126
With the rapid development of Internet social media, using recommendation algorithms to effectively model and filter massive amounts of information has become key to predicting user behavior preferences, hot-spot tendencies, the network security situation and other issues. Meanwhile, with the development of deep learning, graph neural network models have achieved good results on the dense graph-structured data found in recommendation systems. Collaborative filtering, the most widely used recommendation algorithm, uses user-item interaction data to predict users' future preferences and item ratings. However, existing recommendation algorithms still face the problems of data sparsity and cold start, and lack a good quantification of uncertainty. This paper proposes an inductive matrix completion prediction fusion algorithm based on probabilistic meta-learning (MetaIMC), which recasts meta-learning from the perspective of Bayesian inference, builds a robust GNN meta-learning model, and makes full use of data priors to learn new tasks from sparse data. Firstly, MetaIMC effectively uses variational Bayesian inference to obtain the prior distribution, alleviating the uncertainty and ambiguity in meta-model task training and further improving the generalization ability of the model. Secondly, MetaIMC can recommend to new users and solve the cold-start problem without any user side information. Finally, in the two scenarios of traditional matrix completion and user cold start, the model is evaluated on three public datasets, Flixster, Douban and Yahoo_music; the results verify the effectiveness of MetaIMC on the traditional matrix completion task, and it achieves the best performance on the cold-start problem.
Concept Drift Detection Method for Multidimensional Data Stream Based on Clustering Partition
CHEN Yuan-yuan, WANG Zhi-hai
Computer Science. 2022, 49 (7): 25-30.  doi:10.11896/jsjkx.210600155
The analysis and utilization of the potential information in data streams is an important part of data stream mining. Concept drift, in which the distribution of the data changes over time, is a major challenge for data stream mining, and detecting changes in the data distribution is a direct and effective way to detect it. Some existing concept drift detection methods use a tree structure or a grid to build a histogram describing the data distribution. However, the tree structure easily produces inspection blind spots and has poor interpretability, while the grid method consumes too much memory on multi-dimensional data. To solve these problems, a concept drift detection method for multi-dimensional data streams called partition based on uniform density clusters (PUDC) is proposed. The algorithm uses the k-Means algorithm to partition the data into uniform-density regions and applies the chi-square test to the statistics of each partition to detect concept drift. To verify the validity of the method, four artificial datasets and three real datasets are selected for experiments, and the type I and type II error rates for data of different dimensions are compared and analyzed. Experimental results show that the PUDC algorithm is superior to several recent algorithms in concept drift detection for multi-dimensional data streams.
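As a rough illustration of the pipeline the abstract describes, cluster-based partitioning followed by a chi-square test on partition occupancy, a minimal sketch might look like this (plain k-Means stands in for the paper's uniform-density partitioning; function and parameter names are assumptions):

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

def drift_detected(ref_window, cur_window, k=8, alpha=0.05):
    """Partition the reference window with k-Means, count how both windows
    populate the partitions, and compare the two count vectors with chi-square."""
    km = KMeans(n_clusters=k, n_init=10).fit(ref_window)
    ref = np.bincount(km.labels_, minlength=k) + 1           # +1 avoids empty cells
    cur = np.bincount(km.predict(cur_window), minlength=k) + 1
    _, p_value, _, _ = chi2_contingency(np.vstack([ref, cur]))
    return p_value < alpha                                   # drift if occupancies differ
```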
Related Transaction Behavior Detection in Futures Market Based on Bi-LSTM
ZHANG Yuan, KANG Le, GONG Zhao-hui, ZHANG Zhi-hong
Computer Science. 2022, 49 (7): 31-39.  doi:10.11896/jsjkx.210400304
With the continuous development of the futures market, its transaction volume keeps increasing. Behind the massive transactions, some traders use related transaction behaviors to manipulate the futures market and disrupt trading order, which brings severe challenges to market supervision and risk control. How to mine potential related transaction behaviors from massive transaction data is an important task for maintaining fair trading in the futures market. To address this problem, this paper proposes a bidirectional long short-term memory (Bi-LSTM) network model with multi-feature information fusion, which extracts shallow feature information of multiple dimensions, such as trading time, trading volume, position changes and futures variety, from the raw transaction data. The Bi-LSTM network learns deep features from the contextual relationships of the time series in both the forward and backward directions and detects related transaction behavior. For shallow feature extraction, a multi-granularity window feature extraction method based on transaction behavior is proposed to capture the correlation of transactions between accounts at the day, hour, minute and second levels; it addresses the high dimensionality, large volume and weak correlation of the raw transaction data. The model introduces the Dropout strategy to alleviate slow convergence and over-fitting. Experimental results on real data from the Zhengzhou Commodity Exchange show that the proposed method evidently improves classification precision and recall compared with several traditional classification models as well as RNN and LSTM networks. An ablation experiment on each dimension of information proves the effectiveness of the multi-feature fusion method and the multi-granularity window strategy. In addition, tests on the transaction data of two further futures varieties show that the proposed model has good generalization ability.
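A minimal sketch of the kind of Bi-LSTM classifier the abstract describes, assuming pre-extracted per-window shallow features as input (the layer sizes and the two-class head are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    """Bidirectional LSTM over per-window shallow features -> binary label."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(0.5)        # Dropout strategy, as in the abstract
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))  # last step, both directions
```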
Random Shapelet Forest Algorithm Embedded with Canonical Time Series Features
GAO Zhen-zhuo, WANG Zhi-hai, LIU Hai-yang
Computer Science. 2022, 49 (7): 40-49.  doi:10.11896/jsjkx.210700226
In recent years, research on time series classification has attracted increasing attention. Advanced time series classification methods are usually based on effective feature representations. A shapelet is a discriminative subsequence of a time series that effectively expresses its local shape characteristics. However, the high computational cost greatly limits the practicality of shapelet-based time series classification methods. In addition, a traditional shapelet can only describe the overall shape characteristics of a subsequence under the Euclidean distance metric, so it is easily disturbed by noise and has difficulty mining other types of discriminative information contained in the subsequence. To address these problems, a new time series classification algorithm, the random shapelet forest embedded with canonical time series features, is proposed in this paper. The proposed algorithm rests on three key strategies: 1) randomly selecting shapelets and limiting their scope to improve efficiency; 2) embedding multiple canonical time series features in each shapelet to improve the algorithm's adaptability to different classification problems and compensate for the accuracy loss caused by random shapelet selection; 3) building a random forest classifier on the new feature representations to ensure the algorithm's generalization ability. Experimental results on 112 UCR time series datasets show that the proposed algorithm is more accurate than the STC algorithm, which is based on exact shapelet search and the shapelet transform technique, as well as many other state-of-the-art time series classification algorithms. Extensive experimental comparisons also verify the algorithm's significant advantages in efficiency.
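The standard shapelet distance underlying such methods, the minimum z-normalized Euclidean distance between a shapelet and all equal-length subsequences of a series, can be sketched as follows (a naive O(nm) sketch; the paper's embedded canonical features, e.g. statistics of the best-matching window, are not reproduced here):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum z-normalized Euclidean distance between a shapelet and every
    equal-length subsequence of the series (standard shapelet transform)."""
    m = len(shapelet)
    s = (shapelet - shapelet.mean()) / (shapelet.std() + 1e-8)
    best = np.inf
    for i in range(len(series) - m + 1):
        w = series[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-8)
        best = min(best, np.sqrt(np.mean((w - s) ** 2)))
    return best
```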
Collaborative Filtering Recommendation Algorithm Based on Rating Region Subspace
SUN Xiao-han, ZHANG Li
Computer Science. 2022, 49 (7): 50-56.  doi:10.11896/jsjkx.210600062
The collaborative filtering (CF) recommendation algorithm is widely used because of its reasonable interpretability and simple workflow. However, datasets in recommendation systems are typically large-scale, highly sparse and high-dimensional, which poses a great challenge for CF algorithms. To alleviate these issues, this paper proposes a collaborative filtering recommendation algorithm based on rating region subspaces (RRS). Based on the user-item rating matrix, RRS first divides the rating range into three regions: a high-score region, a medium-score region and a low-score region. On the basis of these three regions, each user's item subspaces are found, namely the high-rating, medium-rating and low-rating subspaces. A new similarity measure is defined to calculate the rating support between users in each region subspace: two users are considered similar only if their rating supports are high in all subspaces, which avoids the rating interference of lazy users. Experimental results show that the proposed method alleviates data sparsity to a certain extent, reduces computational complexity and improves recommendation performance, especially on high-dimensional datasets.
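A toy sketch of the region-subspace idea, assuming a 1-5 rating scale with 0 meaning unrated (the cut-offs, the Jaccard-style support and the threshold are illustrative assumptions, not the paper's exact definitions):

```python
import numpy as np

def region_items(ratings, lo, hi):
    """Indices of items a user rated inside one score region."""
    return set(np.where((ratings >= lo) & (ratings <= hi))[0])

def support(u, v, lo, hi):
    a, b = region_items(u, lo, hi), region_items(v, lo, hi)
    return len(a & b) / len(a | b) if a | b else 0.0   # overlap of region subspaces

def similar(u, v, thr=0.3):
    regions = [(1, 2), (3, 3), (4, 5)]   # low / medium / high (assumed cut-offs)
    # similar only if support is high in ALL region subspaces
    return all(support(u, v, lo, hi) >= thr for lo, hi in regions)
```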
Application of Graph Neural Network Based on Data Augmentation and Model Ensemble in Depression Recognition
YANG Bing-xin, GUO Yan-rong, HAO Shi-jie, HONG Ri-chang
Computer Science. 2022, 49 (7): 57-63.  doi:10.11896/jsjkx.210800070
At present, depression is mainly diagnosed through communication between doctor and patient and the completion of relevant questionnaires, which requires clinical knowledge and is subjective, posing many challenges for the diagnosis of depression. Using information processing technology to analyze physiological signals and build an accurate, objective auxiliary diagnosis model is therefore of great value. However, the sample sizes of public datasets for the auxiliary diagnosis of depression are generally small, which keeps the accuracy of auxiliary diagnosis low. This paper proposes a graph neural network (GNN) method for depression recognition based on data augmentation and a model ensemble strategy. The method uses 128-channel EEG signals from 53 subjects and segments the collected EEG data. After data augmentation, the Pearson correlation coefficient is used to calculate the correlation between channels to construct a brain network, a graph neural network learns features of the brain network, and the final prediction is obtained by majority voting under the model ensemble strategy. Experimental results show that the proposed method improves the classification ability of the network and mitigates the poor classification performance caused by small sample sizes. It achieves 77% classification accuracy on the MODMA dataset (24 patients with depression and 29 normal controls) provided by the Pervasive Sensing and Intelligent Systems Laboratory of Lanzhou University, a significant improvement over other methods.
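The brain network construction step, pairwise Pearson correlation between EEG channels thresholded into an adjacency matrix, might be sketched like this (the threshold value is an illustrative assumption):

```python
import numpy as np

def brain_network(eeg, thr=0.5):
    """eeg: (n_channels, n_samples). Returns a binary adjacency matrix built
    from pairwise Pearson correlation between channels."""
    corr = np.corrcoef(eeg)                  # Pearson correlation, channel x channel
    adj = (np.abs(corr) >= thr).astype(float)
    np.fill_diagonal(adj, 0.0)               # no self-loops
    return adj
```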
Parallel Support Vector Machine Algorithm Based on Clustering and WOA
LIU Wei-ming, AN Ran, MAO Yi-min
Computer Science. 2022, 49 (7): 64-72.  doi:10.11896/jsjkx.210500040
Aiming at the problems that parallel support vector machines (SVMs) in the big data environment are sensitive to redundant data, have poor parameter optimization ability and suffer load imbalance during parallel execution, a parallel SVM algorithm, MR-KWSVM, based on a clustering algorithm and the whale optimization algorithm is proposed. Firstly, the algorithm uses a K-means and Fisher (KF) strategy to delete redundant data and trains the SVM on the reduced dataset, effectively lowering the SVM's sensitivity to redundant data. Secondly, an improved whale optimization algorithm based on a nonlinear convergence factor and self-adaptive inertia weight (IW-BNAW) is proposed and used to obtain the optimal SVM parameters, improving the SVM's parameter optimization ability. Finally, in constructing the parallel SVM with MapReduce, a time feedback strategy (TFB) is proposed for load scheduling of the reduce nodes, which improves the parallel efficiency of the cluster and yields a highly parallel SVM. Experimental results show that the proposed algorithm not only guarantees high parallel computing power for SVMs in the big data environment but also significantly improves classification accuracy, with better generalization performance.
Two-stage Deep Feature Selection Extraction Algorithm for Cancer Classification
HU Yan-yu, ZHAO Long, DONG Xiang-jun
Computer Science. 2022, 49 (7): 73-78.  doi:10.11896/jsjkx.210500092
Cancer is one of the deadliest diseases in the world. Using machine learning to process microarray data plays an important role in assisting the early diagnosis of cancer, but the number of gene features far exceeds the number of samples; this imbalance affects the efficiency and accuracy of classification, so feature selection for gene array data is important. Most existing feature selection algorithms apply a single selection criterion and seldom consider feature extraction; most also use long-established neural networks and achieve low classification accuracy. Therefore, a two-stage deep feature selection (TSDFS) algorithm is proposed. The first stage aggregates three feature selection algorithms for comprehensive feature selection, yielding a feature subset. In the second stage, an unsupervised neural network learns the best representation of this feature subset to improve the final classification accuracy. This paper analyzes the effectiveness of TSDFS by comparing classification performance before and after feature selection and against different feature selection algorithms. Experimental results show that TSDFS reduces the number of features while maintaining or improving classification accuracy.
Computer Graphics & Multimedia
Survey on Action Quality Assessment Methods in Video Understanding
ZHANG Hong-bo, DONG Li-jia, PAN Yu-biao, HSIAO Tsung-chih, ZHANG Hui-zhen, DU Ji-xiang
Computer Science. 2022, 49 (7): 79-88.  doi:10.11896/jsjkx.210600028
Action quality assessment refers to evaluating the quality of actions performed by humans in video, such as computing a quality score or level and comparing the performance of different people. It is an important direction in video understanding and computer vision research. This paper summarizes the main methods of action quality assessment, including action quality score prediction methods and level classification and ranking methods, analyzes the performance of these methods on public datasets, and finally discusses the challenges facing future research.
Survey on Visualization Technology for Equipment Condition Monitoring
YANG Xiao, WANG Xiang-kun, HU Hao, ZHU Min
Computer Science. 2022, 49 (7): 89-99.  doi:10.11896/jsjkx.210900167
With the development of sensors and digital technology, more and more equipment and production environments are fitted with sensors and corresponding information systems, which collect and transmit a large amount of valuable data. Visualization technology for equipment condition monitoring can, on the one hand, integrate the professional experience of operators to objectively evaluate the operating status of equipment and, on the other hand, intuitively explain the results of data models, enabling intelligent human-computer cooperative analysis of the data. This paper reviews the research on data visualization in equipment condition monitoring and distills a general visualization process for this scenario. Firstly, according to data characteristics, equipment condition monitoring data is divided into network data, spatiotemporal data, multidimensional data and statistical data. Then, on the basis of the general visualization process, four kinds of analysis tasks are summarized: condition monitoring, correlation analysis, abnormality reasoning and condition prediction, and the visualization techniques for each task are surveyed. Finally, the challenges and future trends of equipment condition monitoring visualization are summarized.
Photorealistic Style Transfer Guided by Global Information
ZHANG Ying-tao, ZHANG Jie, ZHANG Rui, ZHANG Wen-qiang
Computer Science. 2022, 49 (7): 100-105.  doi:10.11896/jsjkx.210600036
Unlike artistic style transfer, the challenge of photorealistic style transfer is to maintain the authenticity of the output while transferring the color style of the style input. Most current photorealistic style transfer methods perform pre-processing or post-processing around an artistic style transfer method to maintain the authenticity of the output image. However, artistic style transfer methods usually cannot make full use of global color information to achieve a coordinated overall impression, and the pre- and post-processing operations are often tedious and time-consuming. To solve these problems, this paper builds a photorealistic style transfer network guided by global information and proposes a color-partition-mean loss (Lcpm) to measure the similarity of the global color distribution between the output and the style input. Adaptive instance normalization (AdaIN) is improved into partition adaptive instance normalization (AdaIN-P) to better suit the color style transfer of real images. In addition, a cross-channel partition attention module is introduced to make better use of global context information and improve the overall coordination of output images. Together these components guide the network's decoder to make full use of global information. Experimental results show that, compared with other state-of-the-art methods, the proposed model achieves a better photorealistic style transfer effect while preserving image details.
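For reference, baseline AdaIN, which the paper modifies into AdaIN-P by partitioning channels, aligns the channel-wise mean and standard deviation of the content features with those of the style features; a standard sketch of the unmodified operation:

```python
import torch

def adain(content, style, eps=1e-5):
    """Standard AdaIN on (batch, channels, H, W) feature maps: shift/scale the
    content statistics to match the style statistics, channel by channel."""
    c_mu = content.mean((2, 3), keepdim=True)
    c_std = content.std((2, 3), keepdim=True)
    s_mu = style.mean((2, 3), keepdim=True)
    s_std = style.std((2, 3), keepdim=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu
```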
Fine-grained Semantic Association Video-Text Cross-modal Entity Resolution Based on Attention Mechanism
ZENG Zhi-xian, CAO Jian-jun, WENG Nian-feng, JIANG Guo-quan, XU Bin
Computer Science. 2022, 49 (7): 106-112.  doi:10.11896/jsjkx.210500224
With the rapid development of mobile networks and we-media platforms, a great deal of video and text information is generated, creating an urgent demand for video-text cross-modal entity resolution. To improve its performance, a fine-grained semantic association video-text cross-modal entity resolution model based on an attention mechanism (FSAAM) is proposed. For each frame in a video, feature information is extracted by an image feature extraction network, fine-tuned by a fully connected network, and mapped to a common space. Meanwhile, the words in the text description are vectorized by word embedding and mapped to the common space by a bi-directional recurrent neural network. On this basis, an adaptive fine-grained video-text semantic association method is proposed that calculates the similarity between each word in the text and each frame in the video; the attention mechanism performs a weighted summation to obtain the semantic similarity between video frames and the text description, and frames with low semantic similarity to the text are filtered out to improve performance. FSAAM mainly addresses the difficulty of constructing video-text semantic associations, which arises because videos contain much redundant information, texts contain many words of little contribution, and words and frames are associated to different degrees. Experiments on the MSR-VTT and VATEX datasets demonstrate the superiority of the proposed method.
Super-resolution Reconstruction of MRI Based on DNGAN
DAI Zhao-xia, LI Jin-xin, ZHANG Xiang-dong, XU Xu, MEI Lin, ZHANG Liang
Computer Science. 2022, 49 (7): 113-119.  doi:10.11896/jsjkx.210600105
The quality of MRI affects the doctor's judgment of the patient's condition, and high-resolution MRI is more conducive to an accurate diagnosis. Super-resolution reconstruction with computer technology can produce high-resolution MRI from existing low-resolution MRI. Exploiting the strong generative ability and unsupervised learning characteristics of generative adversarial networks, this paper studies an MRI super-resolution algorithm based on them and designs a generative adversarial network model, DNGAN, that combines the ResNet and DenseNet structures. The network uses the WGAN-GP formulation as the adversarial loss to stabilize training, together with a content loss function and a perceptual loss function. To make better use of MRI's rich frequency-domain information, the frequency-domain information of MRI is also added to the network as a frequency-domain loss function. To demonstrate the effectiveness of DNGAN, its MRI super-resolution results are compared with those of SRGAN and bicubic interpolation. Experimental results show that the DNGAN model can effectively perform super-resolution reconstruction of MRI.
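The WGAN-GP adversarial loss adopted here penalizes the critic's gradient norm on samples interpolated between real and generated images; a standard sketch of the penalty term (this is the published WGAN-GP formulation, not DNGAN-specific code):

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on points
    interpolated between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(x_hat)
    grad = torch.autograd.grad(score.sum(), x_hat, create_graph=True)[0]
    return lambda_gp * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```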
Real-time Semantic Segmentation Method Based on Multi-path Feature Extraction
CHENG Cheng, JIANG Ai-lian
Computer Science. 2022, 49 (7): 120-126.  doi:10.11896/jsjkx.210500157
The application of deep learning to image semantic segmentation has greatly improved segmentation accuracy, but speed and memory limitations prevent these models from being deployed directly on embedded devices for real-time segmentation. Aiming at the complex structure and huge computational cost of semantic segmentation models, a real-time semantic segmentation algorithm based on multi-path feature extraction combined with edge detection is proposed. The model uses the Sobel, Scharr and Laplacian operators to extract the contour information of the image. The algorithm designs a spatial path to extract spatial position information, a semantic path to extract high-level semantic information, and an edge detection path to extract representative texture features. A ghost lightweight module reduces the number of model parameters and improves segmentation speed. Experimental results on the CamVid dataset at 480-pixel and 360-pixel resolutions show that all three edge detection operators improve segmentation accuracy, with the 3×3 Sobel operator giving the clearest gain: on the CamVid test set, segmentation accuracy reaches 42.9% at a processing speed of 349 frames/s, achieving a good balance between real-time performance and accuracy.
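The three operators of the edge detection path are all available in standard image libraries; a minimal sketch of extracting the three contour maps from a grayscale image (combining the two directional responses by gradient magnitude is an assumption about the implementation, not stated in the abstract):

```python
import cv2

def edge_maps(gray):
    """Contour maps from the three operators used by the edge detection path."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # 3x3 Sobel, the best-performing size
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    sobel = cv2.magnitude(gx, gy)
    scharr = cv2.magnitude(cv2.Scharr(gray, cv2.CV_32F, 1, 0),
                           cv2.Scharr(gray, cv2.CV_32F, 0, 1))
    laplacian = cv2.Laplacian(gray, cv2.CV_32F)
    return sobel, scharr, laplacian
```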
Virtual Reality Video Intraframe Prediction Coding Based on Convolutional Neural Network
LIU Yue-hong, NIU Shao-hua, SHEN Xian-hao
Computer Science. 2022, 49 (7): 127-131.  doi:10.11896/jsjkx.211100179
To improve the performance of intraframe prediction coding for virtual reality video, a convolutional neural network algorithm is used to select video frame coding units (CUs) and reduce the complexity of video image coding. Firstly, quantization parameters are set to obtain virtual reality video frame samples, the image coding tree is constructed, and a convolutional neural network (CNN) optimization model for frame coding units is established. The image brightness of the frame samples is taken as the CNN input and combined with an image rate-distortion cost threshold, and the optimization results for the frame coding units are obtained through training. With CNN-based optimization, coding tree unit (CTU) structures of different depths with an appropriate number of CU modules can be obtained according to the intraframe coding requirements of image regions with different textures. Experiments show that, with a reasonable choice of convolution kernel size and quantization parameters, the CNN algorithm obtains better image quality and shorter coding time than common video intraframe prediction coding algorithms.
Head Fusion: A Method to Improve Accuracy and Robustness of Speech Emotion Recognition
XU Ming-ke, ZHANG Fan
Computer Science. 2022, 49 (7): 132-141.  doi:10.11896/jsjkx.210100085
Speech emotion recognition (SER) refers to using machines to recognize a speaker's emotions from speech, and it is an important part of human-computer interaction (HCI). However, SER research still faces many problems, e.g., the lack of high-quality data, insufficient model accuracy and little research under noisy environments. In this paper, we propose a method called Head Fusion, based on the multi-head attention mechanism, to improve the accuracy of SER. We implement an attention-based convolutional neural network (ACNN) model and conduct experiments on the interactive emotional dyadic motion capture (IEMOCAP) dataset, improving accuracy to 76.18% weighted accuracy (WA) and 76.36% unweighted accuracy (UA). To the best of our knowledge, compared with the state-of-the-art result on this dataset (76.4% WA and 70.1% UA), we achieve a UA improvement of about 6% absolute while achieving a similar WA. Furthermore, we conduct empirical experiments by injecting 50 types of common noise into the speech data, altering the noise intensity, time-shifting the noise and mixing different noise types, to identify their varied impacts on SER accuracy and to verify the robustness of our model. This work will also help researchers and engineers augment their training data with speech carrying appropriate types of noise to alleviate the shortage of high-quality data.
Person Re-identification Method Based on GoogLeNet-GMP and Vector Attention Mechanism
MENG Yue-bo, MU Si-rong, LIU Guang-hui, XU Sheng-jun, HAN Jiu-qiang
Computer Science. 2022, 49 (7): 142-147.  doi:10.11896/jsjkx.210600198
To improve the accuracy and applicability of person re-identification (Re-ID), a Re-ID method based on a vector attention mechanism and GoogLeNet is proposed. Firstly, three groups of images (anchor, positive and negative) are input into the GoogLeNet-GMP network to obtain segmented feature vectors. Then, spatial pyramid pooling (SPP) aggregates features from different pyramid levels and an attention mechanism is introduced: by integrating multi-scale pooling regions that represent the target's visual information, distinguishable features at multiple semantic levels are obtained. A mixture of two different loss functions serves as the final loss function. Experiments on the Market-1501 and DukeMTMC datasets show that the proposed method outperforms other strong methods on the Rank-1 and mAP metrics.
Artificial Intelligence
Advances in Chinese Pre-training Models
HOU Yu-tao, ABULIZI Abudukelimu, ABUDUKELIMU Halidanmu
Computer Science. 2022, 49 (7): 148-163.  doi:10.11896/jsjkx.211200018
In recent years, pre-training models have flourished in the field of natural language processing, aiming to model and represent the implicit knowledge of natural language. However, most mainstream pre-training models target English, and work in the Chinese domain started relatively late. Given its importance to natural language processing, extensive research has been conducted in both academia and industry, and numerous Chinese pre-training models have been proposed. This paper presents a comprehensive review of research on Chinese pre-training models: it first introduces the basics of pre-training models and their development history, then describes the two classical models, Transformer and BERT, on which most Chinese pre-training models are built, then proposes a classification of Chinese pre-training models by model category, and summarizes the evaluation benchmarks in the Chinese domain. Finally, the future development of Chinese pre-training models is discussed, aiming to help researchers gain a more comprehensive understanding of their development and to provide ideas for new models.
Robust Deep Neural Network Learning Based on Active Sampling
ZHOU Hui, SHI Hao-chen, TU Yao-feng, HUANG Sheng-jun
Computer Science. 2022, 49 (7): 164-169.  doi:10.11896/jsjkx.210600044
Recently, deep learning models have been widely used in various real-world tasks, and improving the robustness of deep neural networks has become an important research direction in machine learning. Recent work shows that training a deep model with noise perturbations can significantly improve its robustness; however, such training requires a large set of precisely labeled examples, which is often expensive and difficult to collect in real-world scenarios. Active learning (AL) is a primary approach to reducing labeling cost: it progressively selects the most useful samples and queries their labels, with the goal of training an effective model with fewer queries. This paper proposes an active-sampling-based neural network learning framework that aims to improve model robustness at low labeling cost. In this framework, an inconsistency sampling strategy measures, with a series of perturbations, each unlabeled example's potential utility for improving model robustness; the examples with the largest inconsistency are then selected for training the deep model with noise perturbations. Experimental results on benchmark image classification datasets show that the inconsistency-based active sampling strategy effectively improves the robustness of deep neural network models at lower labeling cost.
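The paper's exact inconsistency measure is not given in the abstract; one plausible instantiation, scoring each unlabeled sample by how much its predictions disagree across perturbations and labeling the most inconsistent ones, can be sketched as follows (the KL-to-mean score is an assumption):

```python
import numpy as np

def inconsistency(probs):
    """probs: (n_perturbations, n_classes) predicted distributions for one
    unlabeled sample under a series of perturbations. Higher spread = more useful."""
    mean = probs.mean(axis=0)
    kl = np.sum(probs * np.log((probs + 1e-12) / (mean + 1e-12)), axis=1)
    return float(kl.mean())     # average divergence from the mean prediction

def select_batch(all_probs, k):
    """Pick the k most inconsistent unlabeled samples to query for labels."""
    scores = np.array([inconsistency(p) for p in all_probs])
    return np.argsort(scores)[-k:]
```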
Method for Abnormal User Detection Oriented to E-commerce Networks
DU Hang-yuan, LI Duo, WANG Wen-jian
Computer Science. 2022, 49 (7): 170-178.  doi:10.11896/jsjkx.210600092
In e-commerce networks, abnormal users often show behavioral characteristics different from those of normal users. Detecting abnormal users and analyzing their behavior patterns is of great practical significance for maintaining the order of e-commerce platforms. By analyzing the behavior patterns of abnormal users, we abstract the e-commerce network into a heterogeneous information network and convert it into a user-device bipartite graph. On this basis, we propose a self-supervised anomaly detection model (S-SADM) for detecting abnormal users in e-commerce networks. The model has a self-supervised learning mechanism: an autoencoder encodes the user-device bipartite graph to obtain user node representations, backpropagation is completed by optimizing a joint objective function, and support vector data description performs anomaly detection on the user node representations. After automatic iterative optimization of the network, the user node representations carry supervised information and the detection results become relatively stable. Finally, S-SADM is validated on three real network datasets and a semi-synthetic network dataset, and the experimental results demonstrate the method's effectiveness and superiority.
Implicit Causality Extraction of Financial Events Integrating RACNN and BiLSTM
JIN Fang-yan, WANG Xiu-li
Computer Science. 2022, 49 (7): 179-186.  doi:10.11896/jsjkx.210500190
The financial field carries a large amount of high-value information, especially implicit causal events, which contain huge potential value. Performing causal analysis on financial texts to mine the important information hidden in implicit causal events and to understand the deeper evolutionary logic of financial events supports the construction of a financial knowledge base, which plays an important role in financial risk control and early warning. To improve the accuracy of identifying implicit causal events in the financial field, from the perspective of feature mining and based on the self-attention mechanism, an implicit causality extraction method integrating a recurrent attention convolutional neural network (RACNN) and bidirectional long short-term memory (BiLSTM) is proposed. The method combines RACNN, which extracts the more important local features of text through an iterative feedback mechanism, BiLSTM, which better extracts global text features, and a self-attention mechanism, which more deeply mines the semantic information of the fused features. Experimental results on SemEval-2010 Task 8 and a financial-domain dataset show F1 values of 72.98% and 75.74% respectively, significantly better than the other compared models.
Low Cost Accurate Calibration and Tool Path Fitting Method for Milling Robot
HE Xiao, ZHOU Jia-li, WU Chao
Computer Science. 2022, 49 (7): 187-195.  doi:10.11896/jsjkx.210500135
Aiming at the low accuracy of absolute path fitting of milling robots and the tool path error caused by spatial path fitting, a method for obtaining an effective calibration without precision instruments is proposed, with a focus on improving milling accuracy. Firstly, the cutting error caused by path fitting is addressed by recalculating and revising the control points of the robot's path trajectory, providing a further guarantee for subsequent cutting measurement accuracy. Secondly, because the milling spindle offsets the tool end position, self-gravity and external load models are added to the robot calibration model, and a constraint equation and objective function containing angle data are introduced to make the calibration data more comprehensive and improve calibration efficiency. The proposed method is used to calibrate the parameters of a Kuka60 robot. Experiments show that the machining accuracy of the calibrated robot improves significantly: the distance and angle errors of a milled block decrease from 0.520 mm and 30 min to 0.240 mm and 16 min respectively, improvements of 53.8% and 46.7% in milling accuracy.
Robot Path Planning Based on Improved Potential Field Method
WANG Bing, WU Hong-liang, NIU Xin-zheng
Computer Science. 2022, 49 (7): 196-203.  doi:10.11896/jsjkx.210500020
Aiming at the problems of excessive attractive force, local minima, unreachable targets and trapped areas in the traditional artificial potential field (APF) method, an improved potential field method based on a path optimization strategy and parameter optimization is proposed. Firstly, a gravitational compensation gain coefficient avoids the problem of excessive attractive force. Secondly, a virtual target point strategy resolves local minima according to environmental information; an observation distance is set to identify the distribution of obstacles, and different path strategies are selected to avoid unreachable targets. Moreover, the robot rotates in advance to move tangentially away from a trapped area or uses a safe path to pass through it. Finally, the differential evolution algorithm solves the constrained optimization problem so that the initialization parameters of the APF method no longer need to be set by experience. Simulation experiments show that the improved potential field method effectively solves problems such as local minima and unreachable targets; compared with the traditional APF, the path length of the improved algorithm is reduced by 17.5%.
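For context, the classical APF force field being improved combines an attractive pull toward the goal with repulsion inside each obstacle's influence radius; a standard sketch of the baseline (gain values are illustrative):

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classical APF resultant force at position q (2D numpy vectors).
    This is the standard baseline, not the paper's improved variant."""
    f = k_att * (goal - q)                          # attractive force toward the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0 < d < d0:                              # repulsion only inside radius d0
            f += k_rep * (1/d - 1/d0) / d**3 * (q - obs)
    return f
```

Following the force direction step by step yields the path; the well-known failure cases (local minima where attraction and repulsion cancel) are exactly what the paper's virtual target points and parameter optimization address.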
Adaptive Attention-based Knowledge Graph Completion
WANG Jie, LI Xiao-nan, LI Guan-yu
Computer Science. 2022, 49 (7): 204-211.  doi:10.11896/jsjkx.210400129
Existing knowledge graph completion models learn a single static feature representation for entities and relations by integrating multi-source information, but they cannot represent the subtle meanings and dynamic attributes that entities and relations exhibit in different contexts: involved in different triples, entities and relations play different roles and carry different meanings. To solve this problem, an adaptive attention network for knowledge graph completion is proposed, which uses adaptive attention to model the contribution of each task-specific feature dimension and generates dynamic, context-dependent embeddings for the target entities and relations. Specifically, the model defines a neighbor encoder and a path aggregator to process the two structures in an entity's neighborhood subgraph, adaptively learning attention weights to capture the features most logically relevant to the task and endowing entities and relations with fine-grained semantics fitting the current task. Experimental results on link prediction show that the model's MeanRank on the FB15K-237 dataset is 6.9% lower than PathCon's and its Hits@1 is 2.3% higher; on the sparse datasets NELL-995 and DDB14, its Hits@1 reaches 87.9% and 98% respectively. This proves that introducing the adaptive attention mechanism effectively extracts the dynamic attributes of entities and relations, generates more comprehensive embeddings, and improves the accuracy of knowledge graph completion.
Software Self-admitted Technical Debt Identification with Bidirectional Gate Recurrent Unit and Attention Mechanism
XIONG Luo-geng, ZHENG Shang, ZOU Hai-tao, YU Hua-long, GAO Shang
Computer Science. 2022, 49 (7): 212-219.  doi:10.11896/jsjkx.210500075
Software self-admitted technical debt (SATD) arises when developers write comments in a project's source code admitting that debt has been incurred intentionally for short-term benefit, and a large amount of SATD endangers software maintenance. In recent years, more scholars have focused on SATD recognition and proposed different identification approaches, such as SATD detection based on natural language processing or text mining. However, the identification results of most previous studies are unsatisfactory because they rely on existing thesauri or manually extracted features, which not only consumes a lot of time but also increases computational complexity. Therefore, a software SATD identification approach based on a bidirectional gated recurrent unit (GRU) and an attention mechanism is proposed. Word vectors are first obtained with the Skip-gram model, a bidirectional GRU network is constructed to obtain high-level features, and the attention mechanism then automatically discovers the words that play a key role in SATD identification so that the most important semantic information is captured. Experimental results show that the proposed approach performs excellently in precision, recall and F1-score; it effectively identifies software SATD and avoids the complex feature engineering of traditional approaches.
Closed-loop Supply Chain Network Design Model Considering Interruption Risk and Fuzzy Pricing
WU Gong-xing, SUN Zhao-yang, JU Chun-hua
Computer Science. 2022, 49 (7): 220-225.  doi:10.11896/jsjkx.201100084
To mitigate the impact of supply interruption on enterprises, a dual-objective closed-loop supply chain network model considering fuzzy pricing and interruption risk in a competitive environment is proposed. Uncertain demand is defined as a function of the prices offered to customers by the supply chain and its competitors, so that the supply chain can maximize total profit and minimize carbon emissions under competition. The model is solved based on possibility theory, transforming the dual-objective model into a single-objective one. Finally, a real case is used for numerical analysis. The results show that the proposed model not only strengthens the supply chain's ability to resist risk but also helps improve its strategic position in the market.
Computer Network
Survey of Deep Learning for Radar Emitter Identification Based on Small Sample
SU Dan-ning, CAO Gui-tao, WANG Yan-nan, WANG Hong, REN He
Computer Science. 2022, 49 (7): 226-235.  doi:10.11896/jsjkx.210600138
Traditional radar emitter identification methods can no longer meet the need to identify new-system radar emitters in today's complicated and changeable electromagnetic environment. Deep learning methods can effectively extract the intra-pulse features of unsorted radar emitter signals and quickly and accurately identify the intra-pulse modulation type, model type and individual emitter under complex conditions such as low signal-to-noise ratio. In reality, however, radar emitter signals are difficult to collect and cannot satisfy the training needs of traditional deep learning models, so small-sample radar emitter identification is one of the hot and difficult topics of current research. This paper first reviews the recent progress and applications of supervised deep learning methods for radar emitter identification with small samples, then introduces the progress of small-sample learning for radar emitter identification, and finally, in light of current research, presents the challenges and outlook for future work.
Analysis of Performance Metrics of Semantic Communication Systems
JIANG Sheng-teng, ZHANG Yi-chi, LUO Peng, LIU Yue-ling, CAO Kuo, ZHAO Hai-tao, WEI Ji-bo
Computer Science. 2022, 49 (7): 236-241.  doi:10.11896/jsjkx.211200071
Semantic communication systems are currently a hot research topic in the communication field, but the field has not yet established a mature evaluation system, so semantic communication systems designed under different performance metrics perform differently. Focusing on semantic communication systems, this paper introduces performance metrics based on precision, on recall, on the combination of precision and recall, and on word vector space models. It elaborates on the background, significance, main algorithmic ideas and scope of application of these metrics in semantic communication, and analyzes and compares their differences, advantages and disadvantages. Finally, it summarizes the problems currently facing semantic communication performance metrics and points out future directions for research on them.
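As one example of the word-vector-space family of metrics, semantic similarity between a transmitted and a recovered sentence can be scored by the cosine similarity of mean-pooled word embeddings (a common construction, not any specific system's metric; the `embed` lookup is an assumed interface):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def sentence_similarity(sent_a, sent_b, embed):
    """Word-vector-space metric: cosine similarity of mean-pooled embeddings.
    `embed` maps a token to its vector (e.g. a pretrained word2vec lookup)."""
    va = np.mean([embed(w) for w in sent_a.split()], axis=0)
    vb = np.mean([embed(w) for w in sent_b.split()], axis=0)
    return cosine(va, vb)
```

The precision/recall family, by contrast, counts overlapping content units and combines them, e.g. as F1 = 2PR/(P+R).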
Satellite Onboard Observation Task Planning Based on Attention Neural Network
PENG Shuang, WU Jiang-jiang, CHEN Hao, DU Chun, LI Jun
Computer Science. 2022, 49 (7): 242-247.  doi:10.11896/jsjkx.210500093
Satellite onboard autonomous task planning is one of the key technologies for operating earth observation satellites and has received great attention from researchers in recent years. Given limited onboard computing resources and the dynamic changes of observation tasks and resources, heuristic search algorithms are mainly used to solve the onboard task planning problem, and the optimality of their solutions needs improvement. This paper first constructs a new sequential decision-making framework for observation tasks, under which an earth observation satellite can decide in real time which observation task to perform, without generating a plan in advance. It then designs an observation task decision model based on an attention mechanism, together with the corresponding input feature representation and model training method, and proposes an observation task sequencing algorithm based on an attention neural network. Finally, on a set of random data, the proposed algorithm is compared with two deep learning algorithms and two heuristic online search algorithms. Experimental results show that the response time of the proposed method is less than one-fifth that of the existing deep learning algorithms, and its profit gap is much smaller than that of the heuristic search algorithms, confirming the feasibility and effectiveness of the method.
Edge-Cloud Collaborative Resource Allocation Algorithm Based on Deep Reinforcement Learning
YU Bin, LI Xue-hua, PAN Chun-yu, LI Na
Computer Science. 2022, 49 (7): 248-253.  doi:10.11896/jsjkx.210400219
Mobile edge computing (MEC) enhances data processing in low-power networks and has become an efficient computing paradigm. This paper considers an edge-cloud collaborative system composed of multiple mobile terminals (MTs) that supports a variety of offloading modes. To reduce the total delay of the MTs, a task offloading algorithm based on deep reinforcement learning is proposed: it implements a deep neural network (DNN) as a scalable solution and learns the multi-base offloading mode from experience to minimize the total delay. Simulation results indicate that, compared with the deep Q network (DQN) algorithm and the deep deterministic policy gradient (DDPG) algorithm, the proposed algorithm significantly improves the maximum performance gain. In addition, the proposed algorithm converges well, and its result approaches the optimum obtained by exhaustive search.
Multi-task Cooperative Optimization Algorithm Based on Adaptive Knowledge Transfer and Resource Allocation
TANG Feng, FENG Xiang, YU Hui-qun
Computer Science. 2022, 49 (7): 254-262.  doi:10.11896/jsjkx.210600184
Multi-task optimization algorithms optimize each task separately while transferring knowledge among tasks, improving the overall performance of multiple tasks. However, negative knowledge transfer between tasks with low similarity degrades overall performance, allocating the same computing resources to tasks of different difficulty wastes resources, and using a fixed search step size at different stages of a task makes it easy to fall into local optima. To solve these problems, a multi-task collaborative optimization algorithm (AMTO) based on adaptive knowledge transfer and dynamic resource allocation is proposed. Firstly, each task is optimized by a single population divided into three subpopulations that adopt three different search strategies to increase the diversity of search behavior; the search step size is updated dynamically according to the individual update success rate within a task, enhancing adaptive search ability and avoiding local optima. Secondly, the similarity between tasks is computed online from the feedback of knowledge transfer among tasks, and the transfer probability is adjusted adaptively according to that similarity; when similarity is low, the task bias is subtracted to reduce the performance degradation caused by negative transfer and to improve the algorithm's perception of inter-task differences. Then, the difficulty and optimization state of each task is estimated from the improvement of its performance, and resources are dynamically allocated on demand to tasks of different difficulty and state to maximize their utilization and reduce waste. Finally, on both simple and complex multi-task optimization functions, the proposed algorithm is compared with classical multi-task algorithms to verify the effectiveness of the adaptive transfer strategy, the dynamic resource allocation strategy, and their combination.
Online Task Offloading Algorithm for Data Stream Edge Computing
ZHANG Chong-yu, CHEN Yan-ming, LI Wei
Computer Science. 2022, 49 (7): 263-270.  doi:10.11896/jsjkx.210300195
With the development of Internet of Things (IoT) technology, its application scenarios have recently exploded, and such applications are generally delay-sensitive and resource-constrained. Offloading real-time tasks under limited resources is a central issue, and allocating limited computational resources to real-time tasks is an NP-hard combinatorial optimization problem. To solve it, this paper proposes a real-time resource management algorithm based on Lyapunov optimization that stabilizes the virtual queues while optimizing total power consumption and total utility. Firstly, an optimization model for total power consumption and weighted total utility is formulated under computation and communication resource constraints; the model contains two virtual buffer queues, and tasks are offloaded under a device-to-device (D2D) scheduling model. Then, based on Lyapunov optimization, the joint long-term optimization of average total energy consumption and total utility is decomposed into a series of real-time optimization problems, for which a greedy matching algorithm is proposed. Experimental results demonstrate that the proposed algorithm performs 8.6% better than the best result of the random method and approximates the exhaustive search method under different connection degrees.
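The core of the Lyapunov approach is the per-slot drift-plus-penalty rule: in each slot, choose the action minimizing V times the penalty (here power) plus the weighted queue drift, trading off the long-term average objective against queue stability via the parameter V. A minimal sketch (the action interface is hypothetical, not the paper's formulation):

```python
def drift_plus_penalty_choice(actions, Q, V):
    """Per-slot rule from Lyapunov optimization: among candidate actions, pick
    the one minimizing V*power + sum_i Q_i*(arrival_i - service_i), which
    bounds average power while keeping the virtual queues Q stable.
    `actions` is an iterable of (power, arrivals, services) tuples."""
    def cost(a):
        power, arrivals, services = a
        drift = sum(q * (lam - mu) for q, lam, mu in zip(Q, arrivals, services))
        return V * power + drift
    return min(actions, key=cost)
```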
Server-reliability Task Offloading Strategy Based on Deep Deterministic Policy Gradient
LI Meng-fei, MAO Ying-chi, TU Zi-jian, WANG Xuan, XU Shu-fang
Computer Science. 2022, 49 (7): 271-279.  doi:10.11896/jsjkx.210600040
With the popularization of smart mobile devices, a new generation of mobile applications such as face recognition and virtual reality has gradually emerged. The limited computing power and battery capacity of mobile devices cannot support applications with high computing requirements or latency-sensitive applications, and mobile edge computing (MEC) has been proposed to solve this problem. In the MEC environment, however, edge servers have low reliability, and equipment failures can invalidate existing offloading decisions, increasing application response time and degrading user experience. Considering possible edge server failures, and noting that the deep deterministic policy gradient (DDPG) algorithm handles high-dimensional action spaces well by fitting the policy function with a network, this paper proposes server-reliability task offloading based on deep deterministic policy gradient (SRTO-DDPG). The main work is as follows. Firstly, the failure rate of application execution is reduced by duplicating subtasks for secondary offloading. Secondly, the problem of task offloading and resource allocation under server reliability constraints, minimizing application delay, is modeled as a Markov decision process (MDP). Finally, a DDPG-based algorithm solves the problem. Simulation results show that the SRTO-DDPG strategy effectively interacts with the environment to obtain the optimal offloading decision and outperforms the local execution (LE) strategy; compared with single-location task offloading based on DDPG (SLTO-DDPG), it achieves a total delay about 26.16% lower under reliability constraints and better adapts to edge server reliability problems in multi-server scenarios.
Power Consumption Scheme Oriented to Full-duplex Multi-relay Cooperative SWIPT Networks
SHAN Yong-feng, JIANG Rui, XU You-yun, LI Da-peng
Computer Science. 2022, 49 (7): 280-286.  doi:10.11896/jsjkx.210400067
Abstract PDF(3562KB) ( 433 )   
References | Related Articles | Metrics
In full-duplex multi-relay cooperative simultaneous wireless information and power transfer (SWIPT) networks, traditional relay selection algorithms do not take into account the idle capacity of unselected relays. As a result, the wasted network performance becomes more serious as the number of relays increases, and exploiting the remaining potential of unselected relays becomes the key to improving network performance. To this end, a new HTT (Harvest then Transmit) power consumption scheme is proposed, which increases the transmit power of the selected relay by making full use of the energy harvesting module at the relay, thereby further improving the system capacity. In addition, for the proposed HTT power consumption scheme, two practical scenarios, BIKT (Battery Information Known at Transmitter) and BIUT (Battery Information Unknown at Transmitter), are considered. Simulation results show that three relay selection algorithms, namely single relay selection, greedy relay selection and exhaustive search, can all benefit from the proposed HTT power consumption scheme in terms of system capacity and outage probability, whether applied to BIKT or BIUT scenarios.
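The capacity gain claimed for the HTT scheme can be sketched in a few lines: the selected relay transmits with its battery power plus the power it has harvested, which raises the Shannon capacity of the relay link. The linear energy-harvesting model and all numbers below are illustrative assumptions, not the paper's system model.

    import math

    def harvested_power(p_source, channel_gain, eta=0.6):
        # Linear EH model: a fraction eta of the received RF power is harvested.
        return eta * p_source * channel_gain

    def capacity(p_tx, h, noise=1e-3):
        # Shannon capacity (bit/s/Hz) of the relay-to-destination link.
        return math.log2(1 + p_tx * h / noise)

    p_battery = 0.5
    p_harvest = harvested_power(p_source=2.0, channel_gain=0.05)
    # HTT idea: transmit with battery power PLUS harvested power.
    print(capacity(p_battery, h=0.1), capacity(p_battery + p_harvest, h=0.1))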
Algorithm to Construct Node-independent Spanning Trees in Data Center Network BCDC
PAN Zhi-yong, CHENG Bao-lei, FAN Jian-xi, BIAN Qing-rong
Computer Science. 2022, 49 (7): 287-296.  doi:10.11896/jsjkx.210500170
Abstract PDF(3645KB) ( 471 )   
References | Related Articles | Metrics
As the foundation of cloud computing technology, the communication performance of data center networks has become a research hotspot in recent years. Independent spanning trees (ISTs), an important infrastructure of data center networks, attract much attention from researchers because of their applications in reliable communication, fault-tolerant broadcasting and secure message distribution, and remarkable results have been obtained on some special networks, but only a few results have been reported on line graphs. BCDC, a new server-centric network proposed in 2018, has a logic graph that is the line graph of the crossed cube and is (2n-2)-regular. In this paper, an algorithm is proposed to construct independent spanning trees on BCDC. Firstly, 2n-2 trees are constructed by a parallel algorithm on the crossed cube. Then, these trees are connected by a special rule and transformed into 2n-2 independent trees on BCDC. Finally, the remaining nodes of BCDC are connected to these trees by a proposed algorithm with time complexity O(N), where N is the number of nodes of BCDC. As a result, 2n-2 ISTs rooted at node [r,N(r,2)] are obtained on BCDC, where r is an arbitrary node in the n-dimensional crossed cube CQn.
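The construction itself is specific to the crossed cube and BCDC; what can be sketched compactly is the property the constructed trees must satisfy, namely that for every node the root paths in different trees are internally node-disjoint. The toy 4-cycle below is a hypothetical example, not BCDC.

    def root_path(parent, v, root):
        """Path from v up to the root in a tree given by a parent map."""
        path = [v]
        while v != root:
            v = parent[v]
            path.append(v)
        return path

    def independent(t1, t2, root, nodes):
        """Two spanning trees are independent if, for every node v, the two
        v-to-root paths share no internal node."""
        for v in nodes:
            if v == root:
                continue
            inner1 = set(root_path(t1, v, root)[1:-1])
            inner2 = set(root_path(t2, v, root)[1:-1])
            if inner1 & inner2:
                return False
        return True

    # Two spanning trees of the 4-cycle 0-1-2-3 (parent maps), rooted at 0.
    t1 = {1: 0, 2: 1, 3: 0}
    t2 = {1: 0, 2: 3, 3: 0}
    print(independent(t1, t2, root=0, nodes=[0, 1, 2, 3]))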
Study on Characteristics of Millimeter-wave MIMO Channel in Rainfall Environment
WU Su-jie, ZHOU Jie, WANG Xue-ying, LYU Zhi-kang, SHAO Gen-fu
Computer Science. 2022, 49 (7): 297-303.  doi:10.11896/jsjkx.210600075
Abstract PDF(3357KB) ( 688 )   
References | Related Articles | Metrics
Millimeter wave is a core technology of 5G communication and has broad development prospects. As the communication frequency increases, the influence of weather on signal transmission grows, especially in rainfall environments. In order to establish an accurate and effective communication channel model, a three-dimensional millimeter-wave MIMO spatial channel model at 25 GHz, 30 GHz and 78 GHz is proposed for the base-station mobile communication scenario in a rainfall environment. This paper introduces the geometric and physical characteristics of raindrops, briefly describes the engineering model of rain attenuation, and on this basis proposes an expression for dynamic rain attenuation. The model captures the non-stationarity of the channel in the time and space domains through the line-of-sight and non-line-of-sight paths between the base station and the receiver, as well as the non-stationarity caused by receiver motion and dynamic scattering clusters. Finally, the channel is analyzed theoretically with a birth-death process, and the effects of frequency, rainfall rate, pitch angle and other parameters on the cross-correlation function and channel capacity are derived. Comparison with measured data in the previous literature shows that the derived influence of pitch angle and other parameters on the correlation function and channel capacity fits the actual measurement results closely. This research has important theoretical and practical significance for the application of wireless communication systems in rainfall environments, and extends the research and application of mobile wireless communication channel models.
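The rain attenuation component can be illustrated with the standard power-law form used in engineering models, in which the specific attenuation is k·R^α dB/km for rain rate R in mm/h. The sketch below uses this form with placeholder coefficients; the actual frequency-dependent coefficients and the paper's dynamic extension are not reproduced here.

    def specific_attenuation(rain_rate, k, alpha):
        """Power-law rain attenuation in dB/km for rain rate in mm/h."""
        return k * rain_rate ** alpha

    # (frequency GHz, k, alpha): placeholder values, not ITU-R tabulated ones.
    for f, k, a in [(25, 0.12, 1.06), (30, 0.19, 1.02), (78, 0.55, 0.84)]:
        print(f"{f} GHz: {specific_attenuation(20.0, k, a):.2f} dB/km")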
Dependence Analysis Among Service Stations in Tandem Queueing Systems
GAO Ya, ZHAO Ning, LIU Wen-qi
Computer Science. 2022, 49 (7): 304-309.  doi:10.11896/jsjkx.210500218
Abstract PDF(2561KB) ( 495 )   
References | Related Articles | Metrics
There is dependence among stations in tandem queueing systems, and a deep analysis of the influence of the upstream station on the downstream station is important for studying the performance of such systems. However, the departure process of the upstream station is usually a non-renewal process, which makes the dependence among stations difficult to analyze theoretically. This paper adopts the performance ratio to study the dependence among stations. Through simulations, the relationship between the performance ratio and system parameters is analyzed. It is found that the upstream station can magnify or reduce the mean waiting time of the downstream station. The performance ratio increases with the squared coefficient of variation of the service time at the upstream station. When the performance ratio is greater than 1, it increases with the ratio of the mean service time of the upstream station to that of the downstream station; when the performance ratio is less than 1, it decreases with this ratio. The mean waiting time of the downstream station can thus be changed by adjusting the mean or the squared coefficient of variation of the service time at the upstream station.
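The performance ratio can be reproduced with a short simulation: run the tandem queue with Lindley recursions, measure the mean wait at the downstream station, and divide by the value it would have if its arrivals were a renewal Poisson stream. Parameters and distributions below are illustrative; with exponential service at the upstream station the ratio is close to 1, and replacing it with a higher-variance distribution raises the ratio, matching the trend described above.

    import random

    random.seed(2)

    def simulate_tandem(lam, mean_s1, mean_s2, n=200000):
        """Two-station tandem queue via Lindley recursions; returns the mean
        waiting time at the downstream station."""
        t_arr = dep1 = dep2 = total_w2 = 0.0
        for _ in range(n):
            t_arr += random.expovariate(lam)          # Poisson arrivals
            start1 = max(t_arr, dep1)
            dep1 = start1 + random.expovariate(1.0 / mean_s1)
            start2 = max(dep1, dep2)                  # departure of 1 = arrival at 2
            total_w2 += start2 - dep1
            dep2 = start2 + random.expovariate(1.0 / mean_s2)
        return total_w2 / n

    lam, m2 = 0.5, 1.0
    mu2 = 1.0 / m2
    mm1_wait = lam / (mu2 * (mu2 - lam))   # M/M/1 wait if arrivals were Poisson
    print(simulate_tandem(lam, mean_s1=0.8, mean_s2=m2) / mm1_wait)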
Information Security
Survey on Attacks and Defenses in Federated Learning
CHEN Ming-xin, ZHANG Jun-bo, LI Tian-rui
Computer Science. 2022, 49 (7): 310-323.  doi:10.11896/jsjkx.211000079
Abstract PDF(4808KB) ( 6152 )   
References | Related Articles | Metrics
Federated learning is proposed to resolve the contradiction between data sharing and privacy preservation. It aims to build collaborative models by securely exchanging irreversible information (e.g., model parameters or gradient updates). However, the risks of privacy leakage and malicious attacks during local model training, information interaction and parameter transmission pose major challenges to the practical application of federated learning. This paper summarizes the attack behaviors and corresponding defense strategies in the modeling and deployment process of federated learning. Firstly, it briefly reviews the development of federated learning and its basic modeling process. Next, it classifies attack behaviors in federated learning training and deployment from three aspects, confidentiality, availability and integrity, and surveys the latest research on privacy theft and malicious attacks. Then, it summarizes defense countermeasures for two adversary types, honest-but-curious attackers and malicious attackers, and analyzes the defense capabilities of different strategies. Finally, it discusses the problems and challenges of attack and defense methods in the practice of federated learning, and looks forward to the future development of federated learning in defense strategy and system design.
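The basic modeling process reviewed in the survey centers on aggregating client updates without sharing raw data; the minimal FedAvg-style sketch below shows that step only (two hypothetical clients, size-weighted averaging), not any of the surveyed attack or defense techniques.

    def fed_avg(client_weights, client_sizes):
        """Size-weighted average of client parameter vectors; clients upload
        parameters only, never their raw training data."""
        total = sum(client_sizes)
        dim = len(client_weights[0])
        return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
                for i in range(dim)]

    # Two hypothetical clients with 2-parameter local models.
    print(fed_avg([[0.2, 1.0], [0.4, 0.0]], client_sizes=[100, 300]))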
MTDCD:A Hybrid Defense Mechanism Against Network Intrusion
GAO Chun-gang, WANG Yong-jie, XIONG Xin-li
Computer Science. 2022, 49 (7): 324-331.  doi:10.11896/jsjkx.210600193
Abstract PDF(2605KB) ( 5175 )   
References | Related Articles | Metrics
Both moving target defense and cyber deception defense protect systems and networks by increasing the uncertainty of the information acquired by attackers, and both can slow down network reconnaissance attacks to a certain extent. However, a single moving target defense technology cannot stop attackers who combine multiple sources of information to conduct network intrusions, while deployed decoy nodes may be identified and marked by the attacker, reducing the defense effectiveness. Therefore, this paper proposes a hybrid defense mechanism combining moving target defense and cyber deception defense. Through in-depth analysis of actual network confrontations, a network intrusion threat model is constructed, and a defense effectiveness evaluation model based on the urn model is built. In addition, this paper evaluates the defense performance of the proposed hybrid defense method from multiple aspects, such as virtual network topology size, deception probability of decoy nodes, IP address randomization period and IP address transfer probability, providing reference and guidance for subsequent defense strategy design.
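The urn-model style of evaluation can be approximated by Monte Carlo: the attacker probes a pool of real hosts and decoys, and probing a decoy is detected with some probability. All parameters below are hypothetical, and sampling with replacement is a crude stand-in for IP address randomization, not the paper's evaluation model.

    import random

    random.seed(3)

    def intrusion_success_rate(n_real=5, n_decoys=20, probes=10,
                               detect_prob=0.9, trials=100000):
        """Fraction of trials in which the attacker reaches a real host before
        a decoy probe is detected (urn-style draw with replacement)."""
        success = 0
        for _ in range(trials):
            for _ in range(probes):
                if random.random() < n_real / (n_real + n_decoys):
                    success += 1      # hit a real host
                    break
                if random.random() < detect_prob:
                    break             # decoy tripped, intrusion stopped
        return success / trials

    print(intrusion_success_rate())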
Secure Coordination Model for Multiple Unmanned Systems
LI Tang, QIN Xiao-lin, CHI He-yu, FEI Ke
Computer Science. 2022, 49 (7): 332-339.  doi:10.11896/jsjkx.210600107
Abstract PDF(2179KB) ( 5335 )   
References | Related Articles | Metrics
With the development of unmanned system technology, the coordinated control of multiple unmanned systems has received wide attention. Designing a reasonable coordination model is an important research topic for multiple unmanned systems, but existing coordination models have security problems in open network environments. Based on an analysis of the deficiencies of existing models, a coordination model for multiple unmanned systems with security characteristics, RCC, is proposed. The RCC model introduces logic rules into the coordination process to verify the legality of terminals and control the authority of the communicating parties, thereby ensuring the security of multiple unmanned systems. Finally, a coordination simulation platform for multiple unmanned vehicles is built based on ROS and the RCC model, in which experimental scenes of navigation and obstacle avoidance are designed. Simulation results verify the practicability and security of the RCC model.
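The role of the logic rules can be pictured as an authorization gate on every coordination message: the sender must be a registered terminal and its role must permit the command. The rule table, roles and names below are invented for illustration; the RCC model's actual rule language is not reproduced here.

    # Hypothetical rule table: which terminal roles may issue which commands.
    RULES = {
        ("controller", "set_waypoint"): True,
        ("vehicle", "report_position"): True,
        ("vehicle", "set_waypoint"): False,   # vehicles may not command peers
    }
    REGISTERED = {"uv1": "vehicle", "gc1": "controller"}  # legal terminals

    def authorize(sender, command):
        """Logic-rule check applied before a coordination message is delivered."""
        role = REGISTERED.get(sender)
        return role is not None and RULES.get((role, command), False)

    print(authorize("gc1", "set_waypoint"), authorize("uv1", "set_waypoint"))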
Click Streams Recognition for Web Users Based on HMM-NN
FEI Xing-rui, XIE Yi
Computer Science. 2022, 49 (7): 340-349.  doi:10.11896/jsjkx.210600127
Abstract PDF(3544KB) ( 5260 )   
References | Related Articles | Metrics
User behavior profile analysis is one of the key means to realize network intelligence, and click-object recognition is an important basis for constructing user behavior profiles. Most existing works are designed for the system side, so they can only reflect the behavior characteristics of users in a specific service domain and are not suitable for network-side detection and management. The main challenge for network-side user behavior analysis is that the network channel at the bottom of the protocol stack cannot obtain application-layer or system-side information and can only rely on IP data flows, which makes it difficult to build an effective network-side user behavior profile. In this paper, a new method of user click-object recognition for the intermediate network is proposed. The proposed method combines hidden Markov models (HMM) and neural networks (NN): the HMM framework describes the dynamic behavior of click streams and non-click streams from the perspective of IP flows, while the NN establishes the relationship between the hidden states of the HMMs and complex network behavior characteristics. The attribute of a request sequence is determined by how well the sequence fits the behavior models. The main advantages of this scheme are that it inherits the parsing ability of the HMM and enhances the HMM's ability to describe complex data through the embedded NN. The proposed scheme does not involve the data content carried by IP flows, which makes it suitable for click behavior recognition in both encrypted and non-encrypted network-side scenarios and effectively addresses the challenges of network-side user behavior profile analysis. Experimental results based on multiple real data sets show that the three commonly used evaluation indicators F1, Kappa and AUC exceed 0.91, 0.83 and 0.96 respectively, indicating that the proposed scheme outperforms existing methods.
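The HMM-NN combination can be sketched as a forward algorithm whose emission scores come from a learned network. Below, a fixed sigmoid scorer stands in for the trained NN, and the transition matrix, features and states are all hypothetical; in the paper these would be fitted to click and non-click IP flow data.

    import math

    def nn_emission(state, features):
        """Stand-in for the trained NN: maps flow features to an (unnormalized)
        emission score for each hidden state."""
        w = {"click": [1.0, -0.5], "non_click": [-0.5, 1.0]}[state]
        score = sum(a * b for a, b in zip(w, features))
        return 1.0 / (1.0 + math.exp(-score))

    def forward_loglik(observations, states, trans, init):
        """HMM forward recursion with NN-scored emissions; a request sequence is
        classified by which behavior model gives it the higher score."""
        alpha = {s: init[s] * nn_emission(s, observations[0]) for s in states}
        for obs in observations[1:]:
            alpha = {s: nn_emission(s, obs) *
                        sum(alpha[p] * trans[p][s] for p in states)
                     for s in states}
        return math.log(sum(alpha.values()))

    states = ["click", "non_click"]
    trans = {"click": {"click": 0.7, "non_click": 0.3},
             "non_click": {"click": 0.2, "non_click": 0.8}}
    init = {"click": 0.5, "non_click": 0.5}
    seq = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.9]]   # hypothetical per-flow features
    print(forward_loglik(seq, states, trans, init))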
Frequency Feature Extraction Based on Localized Differential Privacy
HUANG Jue, ZHOU Chun-lai
Computer Science. 2022, 49 (7): 350-356.  doi:10.11896/jsjkx.210900229
Abstract PDF(2455KB) ( 5408 )   
References | Related Articles | Metrics
With the continuous development of information technology in the era of big data, privacy problems have attracted more and more attention. In particular, with the increasing popularity of mobile terminals, how to protect users' private information while releasing data is a major challenge. Academia first proposed centralized differential privacy, which relies on a trusted third party, but the assumption of a trusted third party usually does not hold in practical applications. On the basis of centralized differential privacy, localized differential privacy was further proposed. It can prevent privacy attacks from untrusted third parties and still defends strongly against privacy attackers with rich background knowledge. However, the market must serve service providers as well as users, and to balance the two, the analysis tasks of service providers must still be accomplished. RAPPOR is a good mechanism for these tasks: it perturbs user data with two randomized response mechanisms to guarantee the strength of privacy protection, and a Lasso regression model is used to decode the perturbed data to ensure the accuracy of frequency feature extraction. In this paper, the RAPPOR algorithm is applied to COVID-19 epidemic information collection, obtaining real epidemic data while protecting the privacy of respondents. A data set of people diagnosed with COVID-19 in the United States is used to simulate the RAPPOR mechanism, and the estimates fit the real results to a high degree. The RAPPOR algorithm brings localized differential privacy from theory to application and effectively protects personal privacy.
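RAPPOR's two randomized response layers are simple to sketch: a permanent response memoizes a noisy copy of the true bits, and an instantaneous response re-randomizes each report. The bit vector and parameters below are illustrative (f, p and q are configurable in RAPPOR); the Bloom-filter encoding and the Lasso decoding step are omitted.

    import random

    random.seed(4)

    def permanent_rr(bits, f=0.5):
        """Permanent randomized response: keep each bit with probability 1-f,
        otherwise replace it with a fair coin flip."""
        return [b if random.random() > f else random.randint(0, 1) for b in bits]

    def instantaneous_rr(perm, p=0.25, q=0.75):
        """Instantaneous randomized response: report 1 with probability q if the
        memoized bit is 1, and with probability p if it is 0."""
        return [1 if random.random() < (q if b else p) else 0 for b in perm]

    true_bits = [1, 0, 0, 1]   # hypothetical Bloom-filter encoding of a value
    print(instantaneous_rr(permanent_rr(true_bits)))
    # An aggregator who knows f, p, q can unbias the observed bit frequencies
    # over many reports, then use Lasso regression to recover value frequencies.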
Network Security Situation Prediction Based on IPSO-BiLSTM
ZHAO Dong-mei, WU Ya-xing, ZHANG Hong-bin
Computer Science. 2022, 49 (7): 357-362.  doi:10.11896/jsjkx.210900103
Abstract PDF(2573KB) ( 5181 )   
References | Related Articles | Metrics
Aiming at the complex problem of network security situation prediction, a prediction model based on an improved particle swarm optimization bidirectional long short-term memory (IPSO-BiLSTM) network is proposed to improve convergence speed and prediction accuracy. Firstly, in view of the lack of real situation values in the data set, a situation value calculation method based on attack influence is adopted for situation prediction. Secondly, to address the problems that the particle swarm optimization (PSO) algorithm is prone to falling into local optima and has an unbalanced search capability, the inertia weight and acceleration factors are improved; the improved particle swarm optimization (IPSO) algorithm balances global and local search and converges faster. Finally, IPSO is used to optimize the parameters of the bidirectional long short-term memory (BiLSTM) network to improve its prediction ability. Experimental results show that the fitting degree of IPSO-BiLSTM reaches 0.9946, and its fitting effect and convergence speed are better than those of other models.
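One common way to improve PSO's inertia weight, consistent with the balance described above, is to decrease it linearly so that the search shifts from global to local; the sketch below applies this to a toy objective. In the paper the objective would be the BiLSTM's validation error over its hyperparameters, and the exact IPSO weight and acceleration scheme may differ from this illustration.

    import random

    random.seed(5)

    def fitness(x):
        """Toy objective standing in for the BiLSTM validation error."""
        return sum((xi - 1.0) ** 2 for xi in x)

    def ipso(dim=2, n=20, iters=50, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=fitness)
        for t in range(iters):
            w = w_max - (w_max - w_min) * t / iters   # linearly decreasing inertia
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if fitness(pos[i]) < fitness(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest + [gbest], key=fitness)
        return gbest

    print(ipso())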