Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 49 Issue 5, 15 May 2022
Contents
Computer Science. 2022, 49 (5): 0-0. 
Computer Graphics & Multimedia
Survey on Few-shot Learning Algorithms for Image Classification
PENG Yun-cong, QIN Xiao-lin, ZHANG Li-ge, GU Yong-xiang
Computer Science. 2022, 49 (5): 1-9.  doi:10.11896/jsjkx.210500128
Presently, artificial intelligence algorithms represented by deep learning have achieved state-of-the-art results and been successfully applied in fields such as image classification, biometric recognition and medical-assisted diagnosis, by virtue of ultra-large-scale datasets and powerful computing resources. However, many restrictions in real environments make it impossible to obtain large numbers of samples, or make the cost of obtaining them prohibitive. Studying learning algorithms for the small-sample case is therefore a core driving force for advancing intelligent applications, and it has become a current research hotspot. Few-shot learning covers algorithms that learn and solve problems under limited supervision. This paper first describes, from the perspective of machine learning theory, why few-shot learning is difficult to generalize. Secondly, according to their design motivations, existing algorithms are classified into three categories: representation learning, data augmentation and learning strategy, and their advantages and disadvantages are analyzed. Thirdly, commonly used few-shot evaluation methods and the performance of existing models on public datasets are summarized. Finally, the difficulties and future research trends of few-shot image classification are discussed to provide references for future research.
Survey Progress on Image Instance Segmentation Methods of Deep Convolutional Neural Network
HU Fu-yuan, WAN Xin-jun, SHEN Ming-fei, XU Jiang-lang, YAO Rui, TAO Zhong-ben
Computer Science. 2022, 49 (5): 10-24.  doi:10.11896/jsjkx.210200038
Image instance segmentation is an important part of image understanding in image processing and computer vision. With the development of deep learning and deep convolutional neural networks, instance segmentation methods based on deep convolutional neural networks have made great progress. Instance segmentation combines object detection and semantic segmentation: it recognizes object contours in an image at the pixel level. It not only locates objects in the image and segments them at the pixel level, but also distinguishes different individuals of the same category, providing instance-level understanding rather than mere pixel-level segmentation. This paper first describes the motivation of image segmentation and the role of deep convolutional neural networks. Then, according to the pipeline and characteristics of instance segmentation methods, the research progress is reviewed from two-stage and single-stage perspectives, and the advantages and disadvantages of the two families of methods are described in detail. The design ideas for region proposals, feature extraction and mask prediction are then summarized. In addition, the performance evaluation criteria and common public datasets for instance segmentation are summarized, and on this basis the segmentation accuracy of mainstream instance segmentation models is compared and evaluated. Finally, open problems and possible solutions in current image instance segmentation are pointed out, and the development and future prospects of image instance segmentation are summarized.
Sparse Point Cloud Filtering Algorithm Based on Mask
FENG Lei, ZHU Deng-ming, LI Zhao-xin, WANG Zhao-qi
Computer Science. 2022, 49 (5): 25-32.  doi:10.11896/jsjkx.210600129
Image-based 3D reconstruction is widely used in practice due to fewer hardware constraints, lower cost and higher flexibility. However, because of occlusion between parts of the object, the 3D point clouds generated from images are often sparse and of uneven density, which has long been a difficult and hot issue. This paper proposes a mask-based sparse point cloud filtering algorithm. First, the bounding box of the point cloud is computed and a grid is adaptively constructed according to the sparseness of the point cloud. Secondly, depth-first search is used to recursively find all connected domains composed of the grid cells generated in the first step. The algorithm then adaptively computes a threshold from a quantized importance index, selects the connected domains to be retained based on this threshold, and defines the set of all retained connected domains as a mask describing the global spatial topology of the sparse point cloud. Finally, points covered by the mask are retained while points in uncovered areas are removed, filtering out the outliers. This method can handle point cloud data produced under occlusion and with large differences in spatial density. It effectively removes outliers from the original 3D point cloud while preserving its detailed information.
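The following is a minimal Python sketch of this pipeline, under simplifying assumptions: a fixed grid resolution stands in for the paper's adaptive subdivision, scipy's connected-component labeling replaces the explicit depth-first search, and the importance index is approximated by the point count of each connected domain.

```python
import numpy as np
from scipy import ndimage

def mask_filter(points: np.ndarray, resolution: int = 64) -> np.ndarray:
    """Filter a sparse point cloud by keeping only large connected grid domains."""
    # 1. Bounding box and voxel-grid occupancy.
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - lo) / (hi - lo + 1e-9) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    # 2. Connected domains of occupied cells (26-connectivity).
    labels, n = ndimage.label(grid, structure=np.ones((3, 3, 3)))

    # 3. Importance of each domain = number of points it contains;
    #    keep domains above an adaptive (here: mean-based) threshold.
    point_labels = labels[idx[:, 0], idx[:, 1], idx[:, 2]]
    sizes = np.bincount(point_labels, minlength=n + 1)
    keep = sizes >= sizes[1:].mean()
    keep[0] = False                      # label 0 is the empty background

    # 4. The retained domains form the mask; drop uncovered points.
    return points[keep[point_labels]]

cloud = np.random.rand(2000, 3)          # toy stand-in for a reconstructed cloud
print(mask_filter(cloud).shape)
```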
Study on Cross-media Information Retrieval Based on Common Subspace Classification Learning
HAN Hong-qi, RAN Ya-xin, ZHANG Yun-liang, GUI Jie, GAO Xiong, YI Meng-lin
Computer Science. 2022, 49 (5): 33-42.  doi:10.11896/jsjkx.210200157
The semantic similarity between data of two different media types cannot be computed directly because of the severe heterogeneity gap and semantic gap between them, which hampers the implementation and effectiveness of cross-media retrieval. Although common-space learning can achieve cross-media semantic association and retrieval, its retrieval performance is often unsatisfactory, mainly because it relies on generic feature extraction techniques and general-purpose classification algorithms for semantic correlation and matching. To address this problem, this study proposes a novel cross-media correlation method, Stacking-DSCM-WR, for retrieval between documents and images. WR indicates that text features are extracted with word-embedding techniques and image features with a ResNet. DSCM indicates that deep semantic correlation and matching is used to project data of different modalities into a common subspace. Stacking, an ensemble learning algorithm, is employed to produce the distributions of text documents and images over the same high-level conceptual semantic space for cross-media retrieval. Experiments are carried out on two smaller cross-media datasets, Wikipedia and Pascal Sentence, and one larger cross-media dataset, INRIA-Websearch. The results show that the proposed method effectively extracts text and image features and realizes the correlation and matching of cross-media data in a high-level semantic space. Comparisons with similar cross-media retrieval methods show that the proposed method achieves the best retrieval effect in terms of the MAP metric.
Time Information Integration Network for Event Cameras
XU Hua-chi, SHI Dian-xi, CUI Yu-ning, JING Luo-xi, LIU Cong
Computer Science. 2022, 49 (5): 43-49.  doi:10.11896/jsjkx.210400047
Event cameras are asynchronous sensors that operate in a completely different way from traditional cameras. Rather than capturing images at a fixed rate, event cameras measure light changes (called events) independently for each pixel. As a consequence, they alleviate the problems traditional cameras face under complex lighting conditions and in scenes where objects move at high speed. With the development of convolutional neural networks, learning-based pattern recognition methods have made great progress in visual tasks such as optical flow estimation and object recognition by converting the output of the event camera into a pseudo-image representation. However, such methods discard the temporal correlation within the event stream, so the texture of the pseudo-image is not clear enough and features are difficult to extract. The key to this problem is how to model the relevant information between events in a sample. Therefore, a neural network framework based on an event-stream partition algorithm is proposed, which explicitly integrates the temporal information of event streams. The framework divides the incoming event stream into several slices, and a weight distribution network assigns a different weight to each slice. The framework then uses a convolutional neural network to fuse the temporal information and extract high-level features, and finally classifies the input sample. The proposed framework is thoroughly validated on object recognition. Comparison experiments on the N-Caltech101 and N-Cars datasets show that it significantly improves classification accuracy compared with existing state-of-the-art algorithms.
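A minimal PyTorch sketch of the core idea: partition the event stream into temporal slices, let a small weighting network score each slice, and fuse the weighted slices before a conventional CNN. All layer sizes and the two-convolution backbone are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporalSliceNet(nn.Module):
    def __init__(self, n_slices: int = 8, n_classes: int = 101):
        super().__init__()
        # Scores one weight per slice from its global statistics.
        self.weight_net = nn.Sequential(
            nn.Linear(n_slices, 32), nn.ReLU(), nn.Linear(32, n_slices))
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        # slices: (B, T, H, W), one accumulated event image per temporal slice.
        stats = slices.mean(dim=(2, 3))                    # (B, T)
        w = torch.softmax(self.weight_net(stats), dim=1)   # per-slice weights
        fused = (slices * w[:, :, None, None]).sum(dim=1, keepdim=True)
        return self.backbone(fused)

x = torch.rand(4, 8, 64, 64)     # 4 samples, 8 temporal slices
print(TemporalSliceNet()(x).shape)   # torch.Size([4, 101])
```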
Multi-scale Feature Fusion Image Dehazing Algorithm Combined with Attention Mechanism
FAN Xin-nan, ZHAO Zhong-xin, YAN Wei, YAN Xi-jun, SHI Peng-fei
Computer Science. 2022, 49 (5): 50-57.  doi:10.11896/jsjkx.210400093
Aiming at the problems that traditional image dehazing algorithms are easily restricted by prior knowledge and suffer from color distortion, a multi-scale feature fusion dehazing algorithm combined with an attention mechanism is proposed. The algorithm first obtains feature maps at multiple scales through down-sampling, and then uses skip connections between scales to connect the feature maps of the encoder and decoder for feature fusion. At the same time, a feature attention module composed of a channel attention submodule and a pixel attention submodule is added to the network to weight the importance of different channels and pixels. This module allows the network to pay more attention to detailed information and important features, thereby achieving a better dehazing effect. To verify its effectiveness, qualitative and quantitative experiments are carried out on the RESIDE dataset against five popular dehazing algorithms. The results show that the proposed algorithm dehazes more completely and preserves image color better. Its average PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) are 28.83 dB and 0.9575 respectively, which are 2.23 dB and 0.0172 higher than the second-best model among the compared algorithms. Qualitative experiments on the MSD dataset and real images further confirm the algorithm's dehazing performance and color retention.
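A sketch of a feature attention module of the kind described, with a channel attention submodule followed by a pixel attention submodule (in the style of FFA-Net); the reduction ratio and layer shapes are assumptions.

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Channel attention then pixel attention over a feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(x)    # reweight channels
        return x * self.pixel_att(x)   # reweight spatial positions

feat = torch.rand(2, 64, 32, 32)
print(FeatureAttention(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```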
Infrared and Visible Image Fusion Based on Feature Separation
GAO Yuan-hao, LUO Xiao-qing, ZHANG Zhan-cheng
Computer Science. 2022, 49 (5): 58-63.  doi:10.11896/jsjkx.210200148
Although a pair of infrared and visible images captured in the same scene have different modalities, they share public information and contain complementary private information, and a complete fused image can be obtained by learning and integrating both. Inspired by residual networks, in the training stage each branch is forced to map to a label with global features through the exchange and addition of feature maps among network branches, and each branch is encouraged to learn the private features of its corresponding image. Directly learning the private features of images avoids designing complex fusion rules and preserves feature details. In the fusion stage, a maximum fusion strategy is adopted to fuse the private features, which are added to the learned public features at the decoding layer before the fused image is finally decoded. The model is trained on a multi-focus dataset synthesized from NYU-D2 and tested on the real-world TNO dataset. Experimental results show that, compared with current mainstream infrared and visible fusion algorithms, the proposed algorithm achieves better results in subjective effects and objective evaluation indicators.
Interpretability Optimization Method Based on Mutual Transfer of Local Attention Map
CHENG Ke-yang, WANG Ning, CUI Hong-gang, ZHAN Yong-zhao
Computer Science. 2022, 49 (5): 64-70.  doi:10.11896/jsjkx.210400176
At present, deep learning models are widely deployed in many industrial fields, but their complexity and inexplicability have become the main bottleneck for applications in high-risk domains. The most important class of explanation methods is visual interpretation, whose main representation is the attention map: the decision regions in a sample image are marked to visually display the basis of the model's decision. In existing attention-map-based methods, the attention map of a single model suffers from insufficient interpretive confidence because annotation errors easily appear in the highlighted regions. To solve this problem, this paper proposes an interpretability optimization method based on the mutual transfer of local attention maps, which aims to improve the annotation accuracy of model attention maps and display precise decision regions, thereby strengthening the visual interpretability of the model's decision basis. Specifically, a mutual-transfer network is constructed from lightweight models; feature maps are extracted and superimposed across the layers of each model, and the global attention map is divided into local parts. The Pearson correlation coefficient is used to measure the similarity of corresponding local attention maps between models, and the local attention maps are then regularized and transferred in combination with the cross-entropy loss. Experimental results show that the proposed algorithm significantly improves the annotation accuracy of model attention maps: it achieves an average drop rate of 28.2% and an average increase rate of 29.5%, improving the average drop rate by 3.3% over the most advanced algorithm. These experiments show that the proposed algorithm can find the most responsive regions in the sample image rather than being limited to the visually salient regions. Compared with existing similar methods, it reveals the decision basis of the original CNN model more accurately.
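A small sketch of the similarity step: two models' global attention maps are divided into local patches, and the Pearson correlation coefficient is computed patch by patch. The grid size and random maps are illustrative only.

```python
import numpy as np

def local_pearson(att_a: np.ndarray, att_b: np.ndarray, grid: int = 4):
    """Pearson correlation of corresponding local patches of two attention maps."""
    h, w = att_a.shape
    ph, pw = h // grid, w // grid
    sims = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            pa = att_a[i*ph:(i+1)*ph, j*pw:(j+1)*pw].ravel()
            pb = att_b[i*ph:(i+1)*ph, j*pw:(j+1)*pw].ravel()
            sims[i, j] = np.corrcoef(pa, pb)[0, 1]
    return sims   # low values flag patches whose annotations disagree

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
print(local_pearson(a, b))
```

Patches with low correlation are the ones the transfer step regularizes, pulling the two models' local annotations toward agreement.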
Face Recognition Method Based on Edge-Cloud Collaboration
WEI Qin, LI Ying-jiao, LOU Ping, YAN Jun-wei, HU Ji-wei
Computer Science. 2022, 49 (5): 71-77.  doi:10.11896/jsjkx.210300222
Face recognition is widely used in daily life, for example in shopping, security checks, travel, payment and work attendance. Face recognition systems require strong computing power and large storage space, so the face images to be recognized are usually transmitted to a cloud platform over the network. Due to problems of network coverage, congestion and delay, such systems struggle to meet the needs of practical applications, and the user experience is poor. To address these problems, a face recognition method based on edge-cloud collaboration is proposed. It combines the processing power of cloud computing with the real-time performance of edge computing, so that face recognition is no longer constrained by network status, its applications are broader and the user experience is better. In the cloud, an LResNet feature extraction method is proposed that improves the ResNet34 network structure, and the ArcFace loss function is used to supervise training so that the network learns more angular face features. At the edge, where computing and storage resources are limited, an SResNet feature extraction method is proposed: depthwise separable convolution is used to lighten the LResNet structure, greatly reducing network parameters and computation. Edge-cloud collaborative face recognition experiments show that the system can recognize faces in real time with high accuracy under any network status.
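The lightening step at the edge relies on depthwise separable convolution. The sketch below shows the standard factorization (a depthwise 3x3 followed by a pointwise 1x1) that SResNet-style models substitute for full convolutions; the block layout is a generic assumption.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv + 1x1 pointwise conv: far fewer parameters
    than a full 3x3 convolution with the same in/out channels."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

full = nn.Conv2d(64, 128, 3, padding=1)
lite = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), count(lite))   # the separable block is roughly 8x smaller
```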
Human Activity Recognition Method Based on Class Increment SVM
XING Yun-bing, LONG Guang-yu, HU Chun-yu, HU Li-sha
Computer Science. 2022, 49 (5): 78-83.  doi:10.11896/jsjkx.210400024
Health monitoring based on human activity recognition (HAR) is an important means of discovering health abnormalities. However, in daily activity recognition it is difficult to obtain in advance training samples covering all possible activity categories. When new categories appear at prediction time, a traditional support vector machine (SVM) will incorrectly classify them as known categories. A robust classifier should be able to distinguish newly appearing categories so that they can be handled differently from known ones. This paper proposes a human activity recognition method based on a class-increment SVM that introduces the idea of hyperspheres: it can identify known activity categories with high accuracy and also detect new categories. The multiple hyperspheres obtained through training partition the entire feature space, giving the classifier the ability to detect newly added activity categories. Experimental results show that, compared with the traditional multi-class SVM, the proposed method detects new categories without significantly reducing classification performance on known categories, thereby improving the classifier's ability to recognize human activities in an open environment.
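A minimal sketch of the hypersphere idea, assuming one sphere per known class fitted as (class mean, maximum distance to the mean): a test sample falling outside every sphere is flagged as a new activity category. The paper's SVM-based sphere fitting is not reproduced here.

```python
import numpy as np

def fit_hyperspheres(X: np.ndarray, y: np.ndarray):
    """One (center, radius) pair per known activity class."""
    spheres = {}
    for c in np.unique(y):
        pts = X[y == c]
        center = pts.mean(axis=0)
        radius = np.linalg.norm(pts - center, axis=1).max()
        spheres[c] = (center, radius)
    return spheres

def predict(spheres, x: np.ndarray):
    # normalized distance <= 1 means x lies inside that class's sphere
    inside = {c: np.linalg.norm(x - ctr) / r
              for c, (ctr, r) in spheres.items()}
    best = min(inside, key=inside.get)
    return best if inside[best] <= 1.0 else "NEW_CLASS"

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(6, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
spheres = fit_hyperspheres(X, y)
print(predict(spheres, np.zeros(3)), predict(spheres, np.full(3, 30.0)))
```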
Real-time Detection Model of Insulator Defect Based on Improved CenterNet
LI Fa-guang, YILIHAMU·Yaermaimaiti
Computer Science. 2022, 49 (5): 84-91.  doi:10.11896/jsjkx.210400142
Aiming at the difficulty of detecting insulators and their defects in real time and efficiently during UAV power-line inspection, an improved insulator defect detection model based on CenterNet is proposed. First, the lightweight network EfficientNet-B0 replaces the original feature extraction network ResNet18, which preserves the model's feature extraction ability while speeding up inference. Then a feature enhancement module (FEM) is built, which reasonably redistributes the weights of the feature channels after upsampling and suppresses invalid features. Drawing on FPN (feature pyramid networks), shallow and deep features are fused to enrich the information of the feature layers. Next, the coordinate attention (CA) mechanism, which mixes spatial and channel attention, is introduced into the CenterNet head, making the prediction of category and location information more accurate. Finally, Soft-NMS is used to solve the "single target, multiple boxes" problem caused by inaccurate center-point prediction during detection. Experimental results show that the precision of the improved CenterNet improves by 11.92%, its speed increases by 8.95 FPS, and the model size is reduced by 54 MB. Compared with other detection models, both accuracy and speed are improved, demonstrating the real-time performance and robustness of the improved model.
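The "single target, multiple boxes" fix uses Soft-NMS. Below is a compact sketch of the standard Gaussian-decay variant (Bodla et al.), which decays the scores of overlapping boxes instead of discarding them outright; the thresholds are illustrative.

```python
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while len(boxes) > 0:
        i = scores.argmax()
        keep.append((boxes[i], scores[i]))
        ious = iou(boxes[i], boxes)
        scores = scores * np.exp(-(ious ** 2) / sigma)   # Gaussian score decay
        mask = scores > score_thr
        mask[i] = False                                  # drop the chosen box
        boxes, scores = boxes[mask], scores[mask]
    return keep

b = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
s = np.array([0.9, 0.8, 0.7])
print(soft_nms(b, s))   # the overlapping second box survives with a lower score
```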
Identification Method of Voiceprint Identity Based on ARIMA Prediction of MFCC Features
WANG Xue-guang, ZHU Jun-wen, ZHANG Ai-xin
Computer Science. 2022, 49 (5): 92-97.  doi:10.11896/jsjkx.210400071
The key to voiceprint recognition technology is extracting speech feature parameters that represent speaker characteristics from the speech signal. Considering that most current determinations rely on the experience of forensic examiners, this paper, building on previous work and combining MFCC features, proposes an ARIMA-prediction-based voiceprint identity identification method to improve the accuracy of comparisons between examination materials separated by a gap of years and the reference samples. The method applies a seasonal autoregressive integrated moving average model to the Mel-frequency cepstral coefficient series used for voiceprint identification, performs linear least-mean-square estimation, and improves the formant characteristics of vowels and voiced consonants. Experiments demonstrate that the ARIMA time-series predictions are good, and that text-independent identity identification based on Mel cepstral coefficients using the modified ARIMA achieves high accuracy, with a similarity of more than 60%.
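A minimal sketch of the forecasting step, assuming librosa for MFCC extraction and statsmodels' ARIMA. The order (2, 1, 2) and the choice of forecasting each MFCC coefficient's frame series independently are illustrative assumptions, not the paper's fitted seasonal model, and the file path is hypothetical.

```python
import librosa
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_mfcc(wav_path: str, n_mfcc: int = 13, steps: int = 20):
    """Fit an ARIMA model per MFCC coefficient and forecast future frames."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    forecasts = []
    for series in mfcc:                  # one time series per coefficient
        model = ARIMA(series, order=(2, 1, 2)).fit()
        forecasts.append(model.forecast(steps=steps))
    return np.vstack(forecasts)          # (n_mfcc, steps)

# predicted = forecast_mfcc("old_recording.wav")   # hypothetical file
# The predicted frames can then be compared (e.g. by cosine similarity)
# with MFCCs extracted from the later examination material.
```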
Underwater Image Quality Assessment Based on HVS
LU Ting, HOU Guo-jia, PAN Zhen-kuan, WANG Guo-dong
Computer Science. 2022, 49 (5): 98-104.  doi:10.11896/jsjkx.210100224
Due to absorption and scattering under water, underwater images often suffer from blurring, low contrast and color cast. Such degraded images reduce the accuracy and effectiveness of underwater archaeology, marine ecological research, and underwater target detection and tracking. Underwater image quality assessment therefore plays a key role in the development and exploration of the ocean: an effective evaluation system can guide the optimization of underwater enhancement and restoration algorithms and promote progress in underwater vision. It is thus desirable to design an effective and robust algorithm for underwater image quality evaluation. Since atmospheric image quality metrics do not consider the absorption of light by water, they are not suitable for underwater images, and to date there are few effective underwater metrics. To address this problem, we propose a new no-reference underwater image quality measure combining color, contrast and sharpness indexes, dubbed CCS, which correlates strongly with human subjective perception. These attributes are sensitive both to the physical characteristics of water and to the human visual system (HVS), which responds to changes in visual properties such as color, contrast and edge structure. To verify the performance of CCS, we conduct extensive experiments on a small underwater image dataset, comparing with four no-reference metrics: CPBD, BRISQUE, UIQM and UCIQE. The CCS metric is about 13% better than UIQM in terms of RMSE, and more than 10% better than UIQM and UCIQE in terms of PLCC, SROCC and KROCC. Experimental results demonstrate that CCS correlates more closely with subjective evaluations and can effectively and accurately evaluate underwater image quality.
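An illustrative sketch of the three ingredients such a measure combines: a colorfulness index (Hasler-Suesstrunk), RMS contrast, and Laplacian-variance sharpness. The combination weights are placeholders, not the paper's fitted CCS coefficients.

```python
import cv2
import numpy as np

def color_index(img):                  # Hasler-Suesstrunk colorfulness
    b, g, r = cv2.split(img.astype(np.float64))
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def contrast_index(img):               # RMS contrast on the gray image
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).std()

def sharpness_index(img):              # variance of the Laplacian
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def ccs_like(img, w=(0.4, 0.3, 0.3)):  # placeholder weights
    return (w[0] * color_index(img) + w[1] * contrast_index(img)
            + w[2] * sharpness_index(img))

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
print(ccs_like(img))
```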
Fine-grained Image Classification Based on Multi-branch Attention-augmentation
ZHANG Wen-xuan, WU Qin
Computer Science. 2022, 49 (5): 105-112.  doi:10.11896/jsjkx.210100108
To address the challenges of high intra-class variance and low inter-class variance in fine-grained image classification, a multi-branch attention-augmented convolutional neural network is proposed. A pre-trained Inception-V3 network is used to extract basic features. To prevent features from being extracted from only one part of an object and to encourage the network to attend to the discriminative features of different parts, self-constrained attention-wise cropping and self-constrained attention-wise erasing are applied to the central parts of the original images, which also improves the accuracy of object localization. Meanwhile, a central regularization loss function is proposed to constrain the attention-augmented training process, obtaining better attention regions and widening the gap between different classes of images. Comprehensive experiments on three benchmark datasets show that the approach surpasses state-of-the-art works.
Database & Big Data & Data Science
Line-Segment Clustering Algorithm for Chemical Structure
ZHU Zhe-qing, GENG Hai-jun, QIAN Yu-hua
Computer Science. 2022, 49 (5): 113-119.  doi:10.11896/jsjkx.210700131
Chemical bond recognition is an important subtask of chemical structure recognition. The single, double and triple bonds of a chemical structure are all composed of line segments, and the Hough transform used for line-segment detection easily produces redundant and interfering data. To this end, a clustering algorithm is proposed to cluster the line segments of chemical bonds detected by the Hough transform, dynamically merging redundant segments. Specifically, based on an analysis of the spatial relationships between line segments, a relative similarity measure and an interval similarity measure between segments are defined, and a clustering method based on segment merging is built on these two measures. Experimental results show that the proposed similarity measures comprehensively describe the similarity between line segments; the algorithm obtains good clustering results and accurately restores the true positions of the line segments in the chemical bonds. It is therefore an effective method for preprocessing chemical structure images.
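A compact sketch of the detect-then-merge pipeline: probabilistic Hough detection with OpenCV, followed by a greedy merge of segments whose orientations and positions are close. The two similarity measures here (angle difference and midpoint distance) are simple stand-ins for the paper's relative and interval similarity measures.

```python
import cv2
import numpy as np

def detect_segments(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=10, maxLineGap=3)
    return [] if lines is None else [l[0] for l in lines]   # (x1, y1, x2, y2)

def merge_segments(segs, max_angle=np.deg2rad(5), max_dist=6.0):
    """Greedily merge redundant, nearly collinear segments into clusters."""
    merged = []
    for s in segs:
        ang = np.arctan2(s[3] - s[1], s[2] - s[0]) % np.pi
        mid = np.array([(s[0] + s[2]) / 2, (s[1] + s[3]) / 2])
        for m in merged:
            d_ang = min(abs(ang - m["ang"]), np.pi - abs(ang - m["ang"]))
            if d_ang < max_angle and np.linalg.norm(mid - m["mid"]) < max_dist:
                m["members"].append(s)       # same cluster: fold it in
                break
        else:
            merged.append({"ang": ang, "mid": mid, "members": [s]})
    return merged

img = np.zeros((100, 100), np.uint8)
cv2.line(img, (10, 10), (90, 90), 255, 1)
print(len(merge_segments(detect_segments(img))))   # ideally one cluster
```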
Method on Multi-granularity Data Provenance for Data Fusion
YANG Fei-fei, SHEN Si-yu, SHEN De-rong, NIE Tie-zheng, KOU Yue
Computer Science. 2022, 49 (5): 120-128.  doi:10.11896/jsjkx.210300092
As the amount of data grows and data become increasingly correlated and intertwined, the value of data needs to be maximized through data fusion. However, because the fusion process is complex, a backtracking mechanism for data fusion must be established to explain the process clearly. Although much research has focused on data provenance, most of it is based on queries and workflows, and little targets data fusion. This paper focuses on the provenance of data fusion and proposes a method supporting multi-granularity provenance. First, the data fusion process is abstracted; semantic graphs of schemas, entities and attributes are constructed with the entity as the core; and an optimized model for storing provenance information is proposed. Secondly, on the basis of the semantic graph, provenance query algorithms at the entity level and the attribute level are proposed, together with corresponding query optimization strategies. Finally, experiments demonstrate the effectiveness of the proposed data provenance method.
Feature Fusion Framework Combining Attention Mechanism and Geometric Information
DONG Qi-da, WANG Zhe, WU Song-yang
Computer Science. 2022, 49 (5): 129-134.  doi:10.11896/jsjkx.210300180
Imbalanced data are common in the real world, and their highly skewed distributions seriously affect model performance in two ways. On the one hand, the imbalance in sample size leads to more parameter updates for the majority classes, biasing the model toward them. On the other hand, the minority classes have too few samples and insufficient diversity, so the model's representation of them is inadequate. To solve these problems, this paper proposes a feature fusion framework combining an attention mechanism with geometric information. Specifically, in the first stage, the model learns the semantic and discriminative information of the data through pre-training, and uses the attention mechanism to discover where the model pays most attention. In the second stage, the model mines boundary features using geometric information and fuses them with the attention weights obtained in the first stage, thereby supplementing the minority classes. Experimental results on long-tailed CIFAR10, CIFAR100 and the KDD Cup99 dataset show that the proposed framework effectively improves classification performance on imbalanced data of different types, including image data and structured data.
XGBoost for Imbalanced Data Based on Cost-sensitive Activation Function
LI Jing-tai, WANG Xiao-dan
Computer Science. 2022, 49 (5): 135-143.  doi:10.11896/jsjkx.210400064
For binary classification with class imbalance, a cost-sensitive activation function XGBoost algorithm (CSAF-XGBoost) is proposed to improve the recognition of minority-class samples. When XGBoost constructs decision trees, imbalanced data affect split-point selection, leading to misclassification of the minority class. By constructing a cost-sensitive activation function (CSAF), samples with different predictions receive different gradient variations, which addresses the problem that the gradient of a misclassified minority-class sample is too small for it to be recognized correctly in later iterations. The experiments analyze the relation between the imbalance rate (IR) and the parameters, and compare performance with SMOTE-XGBoost, ADASYN-XGBoost, Focal loss-XGBoost and Weight-XGBoost on UCI datasets. On minority-class recall, CSAF-XGBoost surpasses the best competing method by 6.75% on average and 15% at most, with F1-score and AUC at the same level. The results show that CSAF-XGBoost recognizes minority-class samples better and has wider applicability.
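A minimal sketch of the general mechanism: a custom XGBoost objective that enlarges the gradient and hessian of minority-class samples so they are not washed out during split selection. The simple constant reweighting below is an assumption standing in for the paper's CSAF form, which is not reproduced here.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification

def make_cost_sensitive_obj(minority_weight: float = 5.0):
    """Custom binary objective scaling the logistic gradient of y=1 samples."""
    def obj(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))          # sigmoid of the raw margin
        w = np.where(y == 1, minority_weight, 1.0)
        grad = w * (p - y)                        # scaled logistic gradient
        hess = w * p * (1.0 - p)
        return grad, hess
    return obj

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=make_cost_sensitive_obj(5.0))
# with a custom objective, predict() returns raw margins: threshold at 0
recall = ((booster.predict(dtrain) > 0.0) & (y == 1)).sum() / (y == 1).sum()
print(f"minority recall on training data: {recall:.3f}")
```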
Social Trust Recommendation Algorithm Combining Item Similarity
YU Ai-xin, FENG Xiu-fang, SUN Jing-yu
Computer Science. 2022, 49 (5): 144-151.  doi:10.11896/jsjkx.210300217
With the rapid development of the Internet, it is difficult for users to find content they are interested in among massive network data, and recommendation systems solve this problem. Traditional recommendation systems rely only on users' historical behavior data, which suffers from data sparsity and cold start. Integrating social network information into recommendation has been proven to alleviate these problems and improve recommendation quality. However, most social recommendation systems focus only on one-way trust relationships between users, ignoring the influence of trusted (follower) relationships and of item-side factors on the results. Therefore, a social trust recommendation algorithm combining item similarity, SocialIS, is proposed. SocialIS considers the influence of neighbor users both when the user acts as truster and as trustee, uses the Node2vec algorithm to train item similarity vectors containing users' preferences, and then uses a graph neural network to learn user and item feature vectors for rating prediction. Extensive experiments on the Epinions and Ciao datasets measure performance with error-based metrics (MAE and RMSE) against other algorithms. The results show that the proposed algorithm has smaller rating prediction error and better recommendation performance.
Efficient Neighborhood Covering Model Based on Triangle Inequality Check and Local Strategy
CHEN Yu-si, AI Zhi-hua, ZHANG Qing-hua
Computer Science. 2022, 49 (5): 152-158.  doi:10.11896/jsjkx.210300302
The neighborhood covering model is widely used in classification tasks for its simple mechanism and ability to handle complex data, but it suffers from low efficiency, and related research is scarce. To solve this problem, the triangle inequality between distances is introduced to speed up neighborhood construction. Meanwhile, local neighborhood covering is defined, and a local strategy is used to improve the efficiency of constructing the neighborhood covering. In summary, the traditional neighborhood covering model is improved from these two perspectives, yielding a neighborhood covering model based on triangle inequality checking and a local strategy (TI-LNC). In addition, current classification algorithms based on neighborhood covering classify samples using only neighborhood centers and radii, ignoring the sample information inside neighborhoods, which hurts accuracy. To improve classification accuracy, the sample information within neighborhoods is taken into account and a new classification algorithm based on TI-LNC is designed. Experimental results on ten UCI datasets show that the proposed model is reasonable and effective, achieving higher efficiency and better classification accuracy.
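A small sketch of the pruning idea: distances to a fixed pivot are precomputed once, and the triangle inequality |d(x, p) - d(c, p)| <= d(x, c) lets most exact distance computations be skipped when building a neighborhood of radius r. The single-pivot scheme is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((5000, 8))
pivot = X[0]
d_pivot = np.linalg.norm(X - pivot, axis=1)     # computed once, reused

def neighborhood(center_idx: int, r: float):
    """All points within radius r of X[center_idx], with pruning."""
    # if |d(x,p) - d(c,p)| > r, the triangle inequality rules x out
    lower_bound = np.abs(d_pivot - d_pivot[center_idx])
    candidates = np.flatnonzero(lower_bound <= r)
    d = np.linalg.norm(X[candidates] - X[center_idx], axis=1)
    return candidates[d <= r], len(candidates)

members, n_checked = neighborhood(42, r=0.3)
print(f"exact distances computed for {n_checked}/{len(X)} points, "
      f"{len(members)} neighbors found")
```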
Diversity Recommendation Algorithm Based on User Coverage and Rating Differences
CHEN Zhuang, ZOU Hai-tao, ZHENG Shang, YU Hua-long, GAO Shang
Computer Science. 2022, 49 (5): 159-164.  doi:10.11896/jsjkx.210300263
Traditional recommender systems usually focus on improving recommendation accuracy while neglecting the diversity of recommendation lists, yet several studies have shown that users' diversity needs are also an important part of their satisfaction. This paper proposes a user-coverage model based on item rating differences. When generating a user's interest domain (user coverage), the model combines the rating differences between users on an item with the user-coverage model, obtaining a more precise interest domain. The objective function is constructed in vector form by mapping a user's and an itemset's interest domains to two m-dimensional vectors (the user vector and the itemset vector respectively), which reduces the number of iterations in the computation, and a new item selection strategy is derived from the similarity between these two vectors. The proposed model performs well in both accuracy and diversity. The user vector of a specific user is a constant, but finding the best-matching itemset vector is NP-hard, so a greedy algorithm with a proven theoretical bound is used to solve the optimization problem. Experimental comparisons with recent state-of-the-art diversity recommendation methods on two real-world datasets demonstrate that the proposed algorithm effectively improves the diversity of recommendations.
Study on Affinity Propagation Clustering Algorithm Based on Bacterial Flora Optimization
ZHANG Yu-jiao, HUANG Rui, ZHANG Fu-quan, SUI Dong, ZHANG Hu
Computer Science. 2022, 49 (5): 165-169.  doi:10.11896/jsjkx.210800218
To improve the clustering performance of the affinity propagation clustering algorithm, a bacterial flora optimization algorithm is used to optimize the preference parameter of affinity propagation. First, the similarity matrix is built from the samples to be clustered and the preference parameter is initialized. Secondly, the preference parameter is optimized by the flora algorithm, with the candidate parameters treated as the colony to be trained and the Silhouette index used as the fitness function. The optimized preference parameter is then updated from the colony positions and used for affinity propagation clustering, in which the responsibility and availability matrices are continuously updated until stable clustering results are obtained. Experimental results show that better clusterings are obtained when the parameters of the flora optimization algorithm are set reasonably. Compared with common clustering algorithms, the proposed algorithm obtains higher Silhouette values and the shortest Euclidean distances on an e-commerce dataset and UCI datasets, showing high applicability in cluster analysis.
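A minimal sketch of the optimization loop, with a simple random population search standing in for the bacterial flora operators: candidate preference values are the "colony", and the Silhouette index is the fitness that selects the preference passed to affinity propagation. Population size, ranges and step sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=7)

def fitness(preference: float) -> float:
    ap = AffinityPropagation(preference=preference, random_state=0).fit(X)
    labels = ap.labels_
    if len(set(labels)) < 2:           # silhouette needs >= 2 clusters
        return -1.0
    return silhouette_score(X, labels)

# "Colony" of candidate preferences, refined over a few generations.
colony = np.random.uniform(-200, -1, size=10)
for _ in range(5):
    scores = np.array([fitness(p) for p in colony])
    best = colony[scores.argmax()]
    colony = best + np.random.normal(0, 10, size=10)   # wander around the best
print("best preference:", best, "silhouette:", scores.max())
```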
Community Detection Algorithm Based on Dynamic Distance and Stochastic Competitive Learning
WANG Ben-yu, GU Yi-jun, PENG Shu-fan, ZHENG Di-wen
Computer Science. 2022, 49 (5): 170-178.  doi:10.11896/jsjkx.210300206
Community structure is an important property of complex networks and is profoundly significant for understanding their organization and function. A community detection algorithm based on dynamic distance and stochastic competitive learning (DDSCL) is proposed. The algorithm first combines node degrees and the Euclidean distances between nodes to determine the initial positions of particles in stochastic competitive learning, so that different particles do not compete within the same community at the beginning of the walk, which speeds up convergence. A dynamic distance algorithm is then used to incorporate the dynamic distance between nodes into the particles' preferential walking, making the walk more directional and less random; the walk in turn optimizes the evolution of the dynamic distances. When the particles reach a convergence state, each node is occupied by the particle with the most control over it; each particle ultimately corresponds to a community, and the community structure of the network is revealed by the nodes each particle occupies. DDSCL is evaluated on eight real network datasets using NMI and modularity (Q value) as metrics, and it outperforms the other algorithms overall. The algorithm reduces the randomness of particles' preferential walking in stochastic competitive learning, solves the problem of fragmented communities arising from dynamic distance algorithms, and improves the accuracy of community detection. The experimental results show the proposed algorithm's effectiveness.
Artificial Intelligence
Exploration and Exploitation Balanced Experience Replay
ZHANG Jia-neng, LI Hui, WU Hao-lin, WANG Zhuang
Computer Science. 2022, 49 (5): 179-185.  doi:10.11896/jsjkx.210300084
Experience replay reuses past experience to update the target policy and improves sample efficiency, making it an important component of deep reinforcement learning. Prioritized experience replay samples selectively on top of experience replay to use samples more efficiently. Nevertheless, current prioritized experience replay methods reduce the diversity of the samples drawn from the buffer, causing the neural network to converge to a local optimum. To tackle this issue, a novel method named exploration and exploitation balanced experience replay (E3R) is proposed. The method comprehensively considers the exploration utility and the exploitation utility of samples, sampling according to the weighted sum of two similarities: the similarity between the behavior policy's action and the target policy's action in the same state, and the similarity between the current state and past states. E3R is combined with the policy-gradient algorithm soft actor-critic and the value-based algorithm deep Q-learning, and experiments are carried out on the OpenAI Gym suite of tasks. Experimental results show that, compared with traditional uniform sampling and temporal-difference prioritized sampling, E3R achieves faster convergence and higher cumulative return.
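A sketch of the sampling rule: each transition's weight is a convex combination of (i) the similarity between the stored behavior action and the current target policy's action in that state, and (ii) the similarity between the stored state and the current state. The cosine similarity, softmax normalization and mixing coefficient beta are illustrative choices.

```python
import numpy as np

def e3r_sample(states, actions, target_policy, current_state,
               batch_size=32, beta=0.5):
    """Sample buffer indices by a weighted sum of two similarities."""
    target_actions = target_policy(states)                       # (N, A)
    # (i) behavior action vs. target-policy action in the same state
    sim_a = (actions * target_actions).sum(-1) / (
        np.linalg.norm(actions, axis=-1)
        * np.linalg.norm(target_actions, axis=-1) + 1e-8)
    # (ii) stored state vs. the current state
    sim_s = (states @ current_state) / (
        np.linalg.norm(states, axis=-1) * np.linalg.norm(current_state) + 1e-8)
    w = beta * sim_a + (1 - beta) * sim_s
    p = np.exp(w) / np.exp(w).sum()          # softmax into probabilities
    return np.random.choice(len(states), size=batch_size, p=p)

rng = np.random.default_rng(0)
states, actions = rng.normal(size=(1000, 4)), rng.normal(size=(1000, 2))
W = rng.normal(size=(4, 2))
policy = lambda s: np.tanh(s @ W)            # toy target policy
print(e3r_sample(states, actions, policy, current_state=rng.normal(size=4))[:10])
```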
ECG-based Atrial Fibrillation Detection Based on Deep Convolutional Residual Neural Network
ZHAO Ren-xing, XU Pin-jie, LIU Yao
Computer Science. 2022, 49 (5): 186-193.  doi:10.11896/jsjkx.220200002
In the context of the increasing demand for intelligent diagnosis, a convolutional neural network model based on a residual network is proposed for classifying ECG (electrocardiogram) signals for atrial fibrillation. The MIT-BIH atrial fibrillation database is used to verify the effectiveness of the method and thereby assist automatic atrial fibrillation detection. For this binary ECG classification problem, the atrial fibrillation dataset and the data preprocessing are first introduced. The processed data are then input into the deep learning model built from convolutional layers, which automatically extracts atrial fibrillation features from the ECG signals, and the designed model is used for atrial fibrillation detection. The method is validated with a five-fold cross-validation strategy. Classification performance is reported as sensitivity, specificity, positive predictive value and accuracy, which reach 99.26%, 99.42%, 99.61% and 99.47% respectively. Comparison with existing models confirms that the proposed model is feasible for atrial fibrillation detection. In conclusion, the residual-network-based automatic detection system achieves good classification performance and can be helpful in automatic atrial fibrillation detection.
Academic Knowledge Graph-based Research for Auxiliary Innovation Technology
ZHONG Jiang, YIN Hong, ZHANG Jian
Computer Science. 2022, 49 (5): 194-199.  doi:10.11896/jsjkx.210400195
Because computer knowledge is rapidly updated and contains many ambiguities, it is difficult for students to find reasonable solutions for independent innovation. As an auxiliary innovation tool, an intelligent question answering system can help students grasp the frontier of a subject and find solutions to problems faster and more precisely. In this paper, a scientific research knowledge graph is constructed from a large-scale database of scientific and technological documents, realizing an intelligent question answering system that assists student innovation. To reduce the influence of noisy entities in query questions, this paper proposes auxiliary-task-enhanced intent information for question answering in the computer domain (ATEI-QA). Compared with traditional methods, it extracts the question intent information more accurately and further reduces the influence of noisy entities on intent recognition. Additionally, a series of experiments on a computer-domain dataset and a general-domain dataset compare the method with three mainstream methods. The results demonstrate that the model achieves significant improvements over the three baselines, improving MAP and MRR scores by an average of 3.27% and 1.72% on the computer-domain dataset and 4.37% and 2.81% on the general-domain dataset respectively.
Dialogue-based Entity Relation Extraction with Knowledge
LU Liang, KONG Fang
Computer Science. 2022, 49 (5): 200-205.  doi:10.11896/jsjkx.210300198
Entity relation extraction aims to extract semantic relations between entities from text. To date, work on entity relation extraction has mainly focused on written texts such as news and Wikipedia articles and has achieved considerable success, but research on dialogue texts is still at an early stage. The dialogue corpora currently used for entity relation extraction are small and have low information density, making effective features difficult to capture; and since deep learning models do not associate knowledge the way humans do, simply increasing the amount of annotated data and computing power is not enough to understand dialogue content in detail and depth. In response, this paper proposes a knowledge-integrated entity relation extraction model that uses a Star-Transformer to effectively capture features from dialogue texts and constructs, through keyword co-occurrence, a relation set containing relations and their semantic keywords. The important relation features obtained by computing the correlation between this set and the dialogue text are integrated into the model as knowledge. On the DialogRE dataset the model achieves an F1 value of 53.6% and an F1c value of 49.5%, which proves the effectiveness of the proposed method.
Construction and Classification of Brain Function Hypernetwork Based on Overlapping Group Lasso with Multi-feature Fusion
LI Peng-zu, LI Yao, Ibegbu Nnamdi JULIAN, SUN Chao, GUO Hao, CHEN Jun-jie
Computer Science. 2022, 49 (5): 206-211.  doi:10.11896/jsjkx.210300049
The study of brain function hypernetworks plays an important role in the accurate diagnosis of brain diseases. At present, a variety of hypernetwork construction methods are used in brain disease classification, but they do not take the overlap between groups into account. Studies have shown that overlap between groups may affect the construction of hypernetwork models and the subsequent classification. Using only non-overlapping group structures therefore limits applicability to hypernetworks. Aiming at the hypernetwork construction methods applied to brain disease classification, which consider neither the partial overlap between groups during model construction nor the singleness of attributes in the feature extraction stage, a method based on overlapping group lasso with multi-feature fusion analysis is proposed. The method is used to construct the hypernetwork and applied to the diagnosis of depression. The results show that the classification performance of the overlapping group lasso method is better than that of existing methods, both with the clustering coefficient attribute alone and with multi-feature fusion analysis. Under the overlapping group lasso method, multi-feature fusion analysis achieves a higher classification accuracy than clustering coefficient analysis alone, reaching 87.87%.
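For reference, a standard formulation of the overlapping group lasso penalty that such construction methods optimize; the notation below is generic, not the paper's:

```latex
\min_{\beta}\; \frac{1}{2}\,\lVert y - X\beta \rVert_2^2
  + \lambda \sum_{g \in \mathcal{G}} w_g\, \lVert \beta_g \rVert_2 ,
\qquad \bigcup_{g \in \mathcal{G}} g = \{1,\dots,p\},
\quad \text{groups } g \text{ may overlap.}
```

When two groups share variables, the l2 penalties on the shared coordinates couple the groups; this is exactly the between-group overlap that a non-overlapping group structure cannot express.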
Multimodal Multi-objective Optimization Based on Parallel Zoning Search and Its Application
LI Hao-dong, HU Jie, FAN Qin-qin
Computer Science. 2022, 49 (5): 212-220.  doi:10.11896/jsjkx.210300019
Multimodal multi-objective optimization based on zoning search uses a decision-space decomposition strategy and therefore has natural parallelism. To improve solution efficiency, a parallel zoning search (PZS) using parallel computing is proposed in this paper. In PZS, the entire search space of the multimodal multi-objective optimization problem is first divided into many subspaces; a selected multimodal multi-objective evolutionary algorithm then searches each subregion independently via parallel computing; finally, equivalent solutions are selected from the solutions of all subspaces. To verify the effectiveness of the method, two experiments are executed: in the first, all compared algorithms use the same run time; in the other, they use the same number of function evaluations. The results show that the proposed method effectively helps the selected multimodal multi-objective evolutionary algorithm improve the quality of solutions in the decision space under the same computation time, and saves computation time under the same number of function evaluations. The evolutionary algorithm combined with PZS is also used to solve a multimodal multi-objective energy consumption problem in sea-rail intermodal transportation in which carbon emissions are considered. The obtained results can provide decision support on environmental protection and transportation time in sea-rail intermodal transport.
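A minimal sketch of PZS's parallel skeleton: the decision space is split into subspaces along the first dimension, and a worker (here a plain random search standing in for the chosen multimodal multi-objective evolutionary algorithm) runs independently in each subspace via multiprocessing. The bi-objective toy function is illustrative.

```python
import numpy as np
from multiprocessing import Pool

def objectives(x):
    """Toy bi-objective function with multiple equivalent optima."""
    return np.sin(3 * x[0]) + x[1] ** 2, np.cos(3 * x[0]) + (x[1] - 1) ** 2

def search_subspace(bounds, n_eval=2000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pts = lo + rng.random((n_eval, 2)) * (hi - lo)
    vals = np.array([objectives(p) for p in pts])
    # keep the non-dominated points of this subspace
    nd = [i for i, v in enumerate(vals)
          if not np.any(np.all(vals <= v, axis=1) & np.any(vals < v, axis=1))]
    return pts[nd], vals[nd]

if __name__ == "__main__":
    # split [0, 4] x [0, 2] into 4 zones along the first dimension
    zones = [(np.array([i, 0.0]), np.array([i + 1, 2.0])) for i in range(4)]
    with Pool(4) as pool:
        results = pool.map(search_subspace, zones)
    print([len(p) for p, _ in results], "non-dominated points per zone")
```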
Stance Detection Based on User Connection
LI Zi-yi, ZHOU Xia-bing, WANG Zhong-qing, ZHANG Min
Computer Science. 2022, 49 (5): 221-226.  doi:10.11896/jsjkx.210400135
The main purpose of stance detection is to mine users' attitudes toward topics or events. Unlike other text classification tasks, stance is expressed more obscurely, and attitude is highly user-dependent. Current stance detection methods mainly model the topic content itself, ignoring the user's background information; yet users and their preferences greatly affect the accurate mining of text information, and potential information features can be obtained through associated user information. Therefore, this paper proposes a user-connection-based stance detection model (USDM), which builds user connections by constructing a user graph and, via convolution, mines similar textual stance information among texts of the same user from a global perspective. An attention mechanism is also added to enhance user-aware text representation. Experimental results on the public H&N14 dataset show that the proposed model outperforms other models, and ablation experiments show that user association information and the attention mechanism play an important role in improving detection accuracy.
Novel Neural Network for Dealing with a Kind of Non-smooth Pseudoconvex Optimization Problems
YU Xin, LIN Zhi-liang
Computer Science. 2022, 49 (5): 227-234.  doi:10.11896/jsjkx.210400179
Optimization problems are a long-standing research focus. Nonsmooth pseudoconvex optimization problems are a special kind of nonconvex optimization problem that often appears in machine learning, signal processing, bioinformatics and various scientific and engineering fields. Based on the penalty function idea and differential inclusions, a new recurrent neural network (RNN) method is proposed to solve nonsmooth pseudoconvex optimization problems with inequality and equality constraints. Under the given assumptions, the solution of the RNN enters the feasible region in finite time, stays there thereafter, and finally converges to the optimal solution set of the optimization problem. Compared with other neural networks, the proposed RNN has the following advantages: 1) a simple single-layer structure; 2) no need to compute an exact penalty parameter in advance; 3) the initial point can be chosen arbitrarily. Simulation experiments in MATLAB show that the state solution converges to the optimal solution, whereas for networks that require properly selected initial points, a badly chosen initial point keeps the state solution from converging in finite time, or from converging at all. This not only verifies the effectiveness of the proposed RNN but also shows that it has a wider range of applications.
Computer Network
Survey of Hybrid Cloud Workflow Scheduling
LIU Peng, LIU Bo, ZHOU Na-qin, PENG Xin-yi, LIN Wei-wei
Computer Science. 2022, 49 (5): 235-243.  doi:10.11896/jsjkx.210300303
In the context of data explosion, traditional cloud computing faces the dilemma of insufficient local cloud resources and high expansion costs. The newly emerging hybrid cloud, which combines resource-rich public clouds with data-sensitive private clouds, has become a current research hotspot and application direction. As an attractive paradigm, workflows keep growing in data scale and computing scale, so workflow scheduling is a key issue in hybrid cloud research. This paper first surveys and analyzes workflow scheduling techniques in hybrid cloud environments in depth, and then classifies and compares them into four categories: deadline-oriented, cost-oriented, energy-efficiency-oriented and multi-objective-constrained scheduling. On this basis, future research directions for workflow scheduling in hybrid cloud environments are analyzed and summarized: workflow scheduling on serverless platforms, scheduling based on edge-server network collaboration, cloud-native workflow scheduling based on Argo integration, and scheduling integrating fog computing.
Scheduling Algorithm for Bag-of-Tasks with Due Date Constraints on Hybrid Clouds
YAN Lei, ZHANG Gong-xuan, WANG Tian, KOU Xiao-yong, WANG Guo-hong
Computer Science. 2022, 49 (5): 244-249.  doi:10.11896/jsjkx.210300120
Bag-of-Tasks (BoT) applications consisting of multiple tasks are widely used in various fields. Unlike the traditional deadline constraint in scheduling problems, a due date constraint allows a BoT application to finish later than a predetermined due date, but at the price of a tardiness penalty. In this setting, to reduce the total cost, an efficient harmony search (EHS) algorithm is proposed to optimize task scheduling on hybrid clouds. The algorithm obtains an initial task sequence by random search and fine-tuning. By improving the harmony search steps and the way new harmony memories are generated, a large number of high-quality harmonies can be obtained in one search pass, which greatly improves search efficiency and speeds up the convergence of the algorithm. Through continuous iteration, the global best solution is obtained, that is, the BoT application scheduling scheme with the lowest total cost. Experimental results show that the proposed algorithm significantly outperforms other algorithms and effectively reduces the total cost of scheduling BoT applications.
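A compact sketch of a basic harmony search for assigning n tasks to m machines, minimizing a toy total cost of execution plus a tardiness penalty past a due date. The HMCR/PAR scheme is the textbook one; EHS's improved memory-generation step and its actual cloud cost model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tasks, n_machines = 30, 5
exec_time = rng.uniform(1, 5, (n_tasks, n_machines))
unit_cost = rng.uniform(1, 3, n_machines)
DUE_DATE, PENALTY = 25.0, 10.0

def total_cost(assign):
    run = np.array([exec_time[t, m] for t, m in enumerate(assign)])
    makespan = max(run[assign == m].sum() for m in range(n_machines))
    money = (run * unit_cost[assign]).sum()
    return money + PENALTY * max(0.0, makespan - DUE_DATE)  # tardiness penalty

def harmony_search(hms=20, hmcr=0.9, par=0.3, iters=3000):
    memory = [rng.integers(n_machines, size=n_tasks) for _ in range(hms)]
    costs = [total_cost(h) for h in memory]
    for _ in range(iters):
        new = np.empty(n_tasks, dtype=int)
        for t in range(n_tasks):
            if rng.random() < hmcr:                    # memory consideration
                new[t] = memory[rng.integers(hms)][t]
                if rng.random() < par:                 # pitch adjustment
                    new[t] = rng.integers(n_machines)
            else:                                      # random selection
                new[t] = rng.integers(n_machines)
        c = total_cost(new)
        worst = int(np.argmax(costs))
        if c < costs[worst]:                           # replace worst harmony
            memory[worst], costs[worst] = new, c
    return memory[int(np.argmin(costs))], min(costs)

best, cost = harmony_search()
print("best total cost:", round(cost, 2))
```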
Study on PAPR Reduction Based on Correlation of Chaotic Sequences
ZHAO Geng, WANG Chao, MA Ying-jie
Computer Science. 2022, 49 (5): 250-255.  doi:10.11896/jsjkx.210400292
After analyzing the main techniques for reducing the peak-to-average power ratio (PAPR), a partial transmit sequence method based on the low correlation of chaotic sequences (CL-PTS) is proposed to address the generally unsatisfactory reduction effect. In this method, several chaotic sequences with low autocorrelation are multiplied with the original signal, and the average instantaneous power of the OFDM system is reduced after the inverse fast Fourier transform (IFFT). Simulation results show that, at a complementary cumulative distribution function (CCDF) of 10^-3, the method reduces the PAPR by about 1 dB compared with other similar algorithms, but it is computationally complex and consumes more spectrum resources. On this basis, an improved correlation algorithm (CM-PTS) is proposed: the influence of the number of PTS sub-blocks on the amount of computation is analyzed, and, using the properties of the IFFT, the PAPR is reduced by changing the insertion position of the sequences in the system. The results show that CM-PTS reduces the PAPR by about a further 0.5 dB without increasing the BER.
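A minimal numpy sketch of the PTS mechanism that both CL-PTS and CM-PTS build on: the subcarrier block is partitioned into V sub-blocks, each sub-block is rotated by a phase factor from {1, -1, j, -j}, and the combination with the lowest PAPR is transmitted. The 4x zero-padded IFFT roughly approximates the continuous-time peak; the chaotic-sequence weighting itself is not reproduced here.

```python
import numpy as np
from itertools import product

N, V, OS = 64, 4, 4                  # subcarriers, sub-blocks, oversampling
rng = np.random.default_rng(5)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK symbols

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Adjacent partition into V sub-blocks; each sub-block is IFFT'd separately.
sub_time = []
for v in range(V):
    Xv = np.zeros(N, complex)
    Xv[v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
    sub_time.append(np.fft.ifft(Xv, N * OS))   # zero-padded for oversampling

combos = list(product([1, -1, 1j, -1j], repeat=V - 1))  # first factor fixed to 1
paprs = [papr_db(sum(b * s for b, s in zip((1,) + c, sub_time)))
         for c in combos]
i = int(np.argmin(paprs))

print(f"original PAPR: {papr_db(np.fft.ifft(X, N * OS)):.2f} dB")
print(f"PTS PAPR:      {paprs[i]:.2f} dB with phase factors {(1,) + combos[i]}")
```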
New Hybrid Precoding Algorithm Based on Sub-connected Structure
JIANG Rui, XU Shan-shan, XU You-yun
Computer Science. 2022, 49 (5): 256-261.  doi:10.11896/jsjkx.210300138
Abstract PDF(2433KB) ( 439 )   
References | Related Articles | Metrics
Millimeter wave communication can provide much wider spectrum, making it a key technology for 5G networks, but it also suffers from severe path loss. Large-scale antenna arrays and directional beamforming can solve this problem effectively. However, as the number of antennas increases, the hardware and energy costs of a traditional fully digital precoder become very high, so hybrid precoding is needed to overcome this difficulty. The power consumption of such algorithms in the fully-connected structure is still high, and therefore a new hybrid precoding algorithm based on the sub-connected structure is proposed in this paper. Firstly, the transmit antenna array is divided into several independent sub-arrays; then the analog precoding matrix of each sub-array is designed, and the spectral efficiency of each sub-array is optimized in turn to maximize the total spectral efficiency. Finally, based on the obtained analog precoding matrix, the digital precoding matrix is solved by the least squares method. Simulation results show that, compared with the orthogonal matching pursuit algorithm, the spectral efficiency difference between the two algorithms is no more than 3 bps/Hz while the energy efficiency improves by 23.8%. Compared with the power iterative algorithm, the proposed algorithm achieves both higher spectral efficiency and a 4% improvement in energy efficiency. Therefore, the proposed algorithm has good practical application value.
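A minimal sketch of the sub-connected structure and the least-squares step may help: each RF chain drives a disjoint sub-array through phase-only weights, and the digital precoder is then fitted by least squares. The phase choice per sub-array shown here is a simple heuristic assumption, not the paper's per-sub-array spectral-efficiency optimization.

```python
import numpy as np

def subconnected_hybrid_precoder(F_opt, n_tx, n_rf):
    """Sketch: sub-connected analog precoding + least-squares digital precoding.

    F_opt : (n_tx, n_s) optimal fully digital precoder (e.g. from an SVD of
    the channel).  Each of the n_rf RF chains drives n_tx // n_rf antennas,
    so the analog precoder F_rf is block diagonal with unit-modulus entries.
    F_bb then solves min ||F_opt - F_rf @ F_bb||_F by least squares."""
    m = n_tx // n_rf                                  # antennas per sub-array
    F_rf = np.zeros((n_tx, n_rf), dtype=complex)
    for i in range(n_rf):
        # Heuristic: copy phases from F_opt for this sub-array's segment.
        seg = F_opt[i * m:(i + 1) * m, min(i, F_opt.shape[1] - 1)]
        F_rf[i * m:(i + 1) * m, i] = np.exp(1j * np.angle(seg)) / np.sqrt(m)
    F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
    # Rescale to satisfy the transmit power constraint.
    F_bb *= np.sqrt(F_opt.shape[1]) / np.linalg.norm(F_rf @ F_bb, 'fro')
    return F_rf, F_bb
```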
Angle Estimation of Coherent MIMO Radar Under the Condition of Non-uniform Noise
TANG Chao-chen, QIU Hong-bing, LIU Xin, TANG Qing-hua
Computer Science. 2022, 49 (5): 262-265.  doi:10.11896/jsjkx.210300162
Abstract PDF(1993KB) ( 504 )   
References | Related Articles | Metrics
For the angle estimation problem of a multiple-input multiple-output (MIMO) radar, the noise in the receiver is usually assumed to be uniform, but a non-uniform noise assumption is more realistic. Non-uniform noise, however, results in an unknown noise covariance matrix; if traditional subspace-based angle estimation methods such as the two-dimensional multiple signal classification (2D-MUSIC) algorithm are applied directly, estimation performance degrades or fails altogether. It is therefore necessary to design new algorithms that estimate the noise covariance matrix and recover the noise subspace. Compared with iterative angle estimation algorithms, non-iterative subspace-based (NIS-based) algorithms avoid iterative calculation and thus reduce computational complexity. For this reason, a one-dimensional (1D) NIS-based algorithm under non-uniform noise is first analyzed. It is then extended to 2D NIS-based angle estimation for the MIMO radar, and a theoretical analysis is provided to verify the feasibility of this extension. Simulation results show that the proposed algorithm can jointly estimate the direction of departure (DOD) and direction of arrival (DOA) of targets with good angular accuracy.
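For orientation, the sketch below shows plain 1D MUSIC with NumPy; under uniform noise the eigenvectors of the sample covariance with the smallest eigenvalues span the noise subspace. The paper's NIS-based method differs precisely in how it first handles the unknown diagonal noise covariance, a step this sketch omits. The half-wavelength ULA steering vector is an assumption.

```python
import numpy as np

def music_spectrum(X, n_sources, grid=np.linspace(-90, 90, 361)):
    """Plain 1D MUSIC on array snapshots X (n_antennas x n_snapshots);
    peaks of the returned pseudo-spectrum indicate DOAs in degrees."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n - n_sources]               # noise subspace
    spec = []
    for theta in grid:
        # Steering vector of a half-wavelength uniform linear array.
        a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(theta)))
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return grid, np.array(spec)
```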
Automatic Modulation Recognition Based on Deep Learning
JIAO Xiang, WEI Xiang-lin, XUE Yu, WANG Chao, DUAN Qiang
Computer Science. 2022, 49 (5): 266-278.  doi:10.11896/jsjkx.211000085
Abstract PDF(5456KB) ( 1364 )   
References | Related Articles | Metrics
Automatic modulation recognition (AMR) is critical for efficient spectrum sensing, spectrum management and spectrum utilization in non-cooperative communication scenarios. It is also an important prerequisite for efficient signal processing. Traditional AMR methods based on pattern recognition need to extract features manually, which leads to problems such as high design complexity, low recognition accuracy and weak generalization ability. Practitioners have therefore turned to deep learning (DL) methods, which are good at extracting hidden features from data, and have proposed a number of AMR-oriented deep neural network (ADNN) architectures. Compared with traditional methods, ADNNs achieve higher recognition accuracy, stronger generalization ability and a wider application range. This paper provides a comprehensive survey of ADNNs to help practitioners understand the current research status of this field, and analyzes future directions after pinpointing several open issues. Firstly, typical deep learning methods involved in ADNN design are introduced. Secondly, a few traditional AMR methods are briefly described. Thirdly, typical ADNNs are introduced in detail. Finally, a series of experiments is conducted on an open dataset to compare typical proposals, and several key research directions in this field are put forward.
Non-orthogonal Multiple Access and Multi-dimension Resource Optimization in EH Relay NB-IoT Networks
SHEN Jia-fang, QIAN Li-ping, YANG Chao
Computer Science. 2022, 49 (5): 279-286.  doi:10.11896/jsjkx.210400239
Abstract PDF(2926KB) ( 436 )   
References | Related Articles | Metrics
With the rapid development of narrowband Internet of Things (NB-IoT) technology, more and more NB-IoT devices are being deployed. However, due to severe co-channel interference and signal attenuation, it is difficult to guarantee the quality of service of NB-IoT devices at the network edge. To solve this problem, an energy harvesting (EH) relay-aided non-orthogonal multiple access (NOMA) NB-IoT networking model is proposed in this paper. Based on the proposed model, we aim to maximize the proportional fairness of NB-IoT device data rates by jointly optimizing transmission power, data scheduling and time-slot scheduling, thereby optimizing network performance while ensuring the data rate requirement of each NB-IoT device. By exploiting the convexity of this optimization problem, an optimal multi-dimensional resource allocation algorithm based on the KKT conditions is proposed. Simulation results verify the effectiveness of the proposed algorithm and show that it can improve the data-rate-based proportional fairness by 11.9%, the spectral efficiency by 55.4% and the energy efficiency by 44.1%.
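A toy version of the proportional-fairness objective can be written down directly. The numerical sketch below (function names and the single-slot, single-hop simplification are assumptions) maximizes the sum of log-rates under a power budget and per-device rate floors, whereas the paper derives the optimal multi-dimensional allocation in closed form from the KKT conditions.

```python
import numpy as np
from scipy.optimize import minimize

def pf_power_allocation(g, p_total, r_min, noise=1.0):
    """Proportional-fairness power allocation for n links: maximize
    sum(log(rate_i)) s.t. sum(p) <= p_total and rate_i >= r_min, with
    rate_i = log2(1 + g_i * p_i / noise).  g is an array of channel gains.
    Solved numerically here; the paper solves its joint problem via KKT."""
    n = len(g)
    rate = lambda p: np.log2(1.0 + g * p / noise)
    obj = lambda p: -np.sum(np.log(rate(p) + 1e-12))
    cons = [{'type': 'ineq', 'fun': lambda p: p_total - p.sum()},   # power budget
            {'type': 'ineq', 'fun': lambda p: rate(p) - r_min}]     # rate floors
    res = minimize(obj, np.full(n, p_total / n), constraints=cons,
                   bounds=[(1e-6, p_total)] * n, method='SLSQP')
    return res.x, rate(res.x)
```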
Information Security
Development and Application of Blockchain Cross-chain Technology
SUN Hao, MAO Han-yu, ZHANG Yan-feng, YU Ge, XU Shi-cheng, HE Guang-yu
Computer Science. 2022, 49 (5): 287-295.  doi:10.11896/jsjkx.210800132
Abstract PDF(1669KB) ( 2299 )   
References | Related Articles | Metrics
With the continuous development and innovation of blockchain technology, a large number of blockchain-based applications have emerged. Today's blockchain systems are mostly heterogeneous and not interconnected, and there is no direct value circulation between chains, which greatly limits the functional expansion and development of blockchains. Cross-chain technology refers to the exchange of information between different blockchain system instances and the use of the exchanged information to achieve interconnection and value transfer between blockchains. Firstly, this paper reviews the development of blockchain cross-chain technology and introduces in detail four current mainstream cross-chain techniques: the notary mechanism, sidechains and relays, hash locking, and distributed private key control. Then, based on these techniques, it further introduces several current mainstream cross-chain projects and applications. Finally, by comparing the similarities and differences of the cross-chain techniques in terms of trust model, security, atomicity and scalability, the current development trend of cross-chain technology is summarized and analyzed, and the difficulties and future development directions of the cross-chain field are discussed.
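Of the four techniques, hash locking is the easiest to illustrate compactly. The sketch below shows the hash time-locked contract (HTLC) logic that underlies it: funds can be claimed only by revealing the hash preimage before a deadline, and refunded afterwards. All chain-specific details are abstracted away; this is illustrative logic, not any project's contract code.

```python
import hashlib, time

class HashTimeLock:
    """Minimal HTLC logic behind the hash-lock cross-chain technique."""
    def __init__(self, hashlock: bytes, deadline: float):
        self.hashlock = hashlock          # sha256(preimage), set by the initiator
        self.deadline = deadline          # refund allowed after this time
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        # The counterparty redeems by revealing the preimage before the
        # deadline; revealing it on one chain lets the initiator claim on
        # the other chain, which makes the swap atomic.
        if time.time() < self.deadline and \
                hashlib.sha256(preimage).digest() == self.hashlock:
            self.claimed = True
        return self.claimed

    def refund(self) -> bool:
        return time.time() >= self.deadline and not self.claimed

secret = b"cross-chain swap secret"
lock = HashTimeLock(hashlib.sha256(secret).digest(), time.time() + 3600)
assert lock.claim(secret)
```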
Overview of Side Channel Analysis Based on Convolutional Neural Network
LIU Lin-yun, CHEN Kai-yan, LI Xiong-wei, ZHANG Yang, XIE Fang-fang
Computer Science. 2022, 49 (5): 296-302.  doi:10.11896/jsjkx.210300286
Abstract PDF(2804KB) ( 572 )   
References | Related Articles | Metrics
The profiled side-channel analysis method can effectively attack cryptographic implementations, and side-channel cryptanalysis based on convolutional neural networks (CNNSCA) can carry out such attacks efficiently, even against protected implementations of encryption algorithms. In view of the current research status of profiled side-channel cryptanalysis, this paper compares and analyzes the characteristics and performance differences of several CNNSCA models, focusing on typical CNN model structures and the public side-channel dataset ASCAD. Through model comparison and experimental results, the effects of different CNN modeling methods are analyzed, followed by the performance factors that affect the CNNSCA method and the advantages of side-channel profiling based on convolutional neural networks. Research and analysis show that CNNSCA based on VGG variants performs best in generalization and robustness when attacking target datasets in various situations, but whether the training level and hyperparameter settings of the CNN models used are the most suitable for SCA scenarios has not been verified. In the future, researchers can improve the classification accuracy and key recovery performance of CNNSCA by tuning the hyperparameters of the CNN model, applying data augmentation techniques, and drawing on the best-performing CNN architectures from the ImageNet competition to explore the CNN model best suited to SCA scenarios; this is a clear development trend.
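For readers unfamiliar with the model family discussed, the sketch below defines a small VGG-style 1D CNN of the kind used in ASCAD-based CNNSCA work, mapping a power trace to 256 classes of the key-byte-dependent intermediate value. The layer sizes are illustrative assumptions rather than any surveyed model's exact architecture.

```python
import torch.nn as nn

def make_cnnsca(trace_len=700, n_classes=256):
    """VGG-style 1D CNN sketch for profiled SCA: stacked conv/pool blocks
    over the trace, then dense layers classifying the intermediate value."""
    return nn.Sequential(
        nn.Conv1d(1, 64, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
        nn.Conv1d(64, 128, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
        nn.Conv1d(128, 256, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
        nn.Flatten(),
        nn.Linear(256 * (trace_len // 8), 512), nn.ReLU(),
        nn.Linear(512, n_classes),        # one class per key-byte hypothesis value
    )
```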
Review of Privacy-preserving Mechanisms in Crowdsensing
LI Li, HE Xin, HAN Zhi-jie
Computer Science. 2022, 49 (5): 303-310.  doi:10.11896/jsjkx.210400077
Abstract PDF(2308KB) ( 1141 )   
References | Related Articles | Metrics
In recent years, the rapid spread of intelligent terminals has greatly promoted the development of the crowdsensing service paradigm, which integrates data collection, analysis and processing. As a necessary basis for ensuring the safe operation of services and encouraging users to participate in sensing, privacy preservation has become the primary issue to be solved. This paper presents the state of the art in privacy-preserving mechanisms for crowdsensing services. After describing the main components of crowdsensing, it discusses the definition and metrics of privacy preservation from the perspective of crowdsensing's whole life cycle. The privacy-preserving mechanisms designed in the literature are analyzed and discussed according to the different stages of this life cycle, and the experimental datasets used in the literature are given. Finally, future research challenges are proposed based on the development of crowdsensing and global regulatory requirements for privacy protection.
Quantum Voting Protocol Based on Quantum Fourier Transform Summation
FENG Yan, WANG Rui-cong
Computer Science. 2022, 49 (5): 311-317.  doi:10.11896/jsjkx.210300058
Abstract PDF(1650KB) ( 515 )   
References | Related Articles | Metrics
To address the facts that user information is easily stolen in traditional electronic voting and that existing quantum voting schemes generally have low computational efficiency, a quantum voting protocol combining quantum-Fourier-transform summation with vector coding is proposed. In the protocol, each participant uses the quantum Fourier transform to entangle its secret value, carried by a single-particle state, into the hands of the initiator in order to vote; the secret ranking of the candidates' vote counts is realized by vector coding, and finally the winner announces the number of votes and the ranking. The correctness of the quantum-Fourier-transform summation in the protocol is verified on the quantum computing simulator provided by IBM. Theoretical analysis proves that the protocol is secure against four kinds of attacks, namely intercept-resend attacks, entanglement-measurement attacks, collusion attacks, and attacks by the monitor and candidates, and that its efficiency is higher than that of existing quantum voting schemes of the same type.
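The summation primitive at the heart of the protocol is easy to simulate classically. In the sketch below, each participant imprints its secret as a phase on a d-dimensional state prepared by the initiator, and the inverse quantum Fourier transform reveals only the sum modulo d; the vector-coding ranking step and all security machinery are omitted.

```python
import numpy as np

def qft_sum(secrets, d):
    """Classical simulation of QFT-based summation: the initiator prepares
    QFT|0> (a uniform superposition), each participant multiplies basis
    state |k> by exp(2*pi*i*s_j*k/d), and the inverse QFT maps the result
    to the basis state |sum(s) mod d>."""
    omega = np.exp(2j * np.pi / d)
    state = np.full(d, 1 / np.sqrt(d), dtype=complex)    # QFT|0>
    for s in secrets:
        state *= omega ** (s * np.arange(d))             # local phase rotation
    qft = omega ** np.outer(np.arange(d), np.arange(d)) / np.sqrt(d)
    state = qft.conj().T @ state                         # inverse QFT
    return int(np.argmax(np.abs(state)))                 # measurement outcome

assert qft_sum([3, 5, 6], d=16) == 14                    # (3 + 5 + 6) mod 16
```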
Testcase Filtering Method Based on QRNN for Network Protocol Fuzzing
HU Zhi-hao, PAN Zu-lie
Computer Science. 2022, 49 (5): 318-324.  doi:10.11896/jsjkx.210300281
Abstract PDF(1954KB) ( 711 )   
References | Related Articles | Metrics
At present, the targets of network protocol fuzzing tend to be large protocol entities, and traditional testcase filtering methods rely mainly on the running-status information of the test object; the larger the test object, the longer it takes to execute a single testcase. In view of the long invalid execution time and low efficiency of traditional testcase filtering methods for network protocol fuzzing, a QRNN-based testcase filtering method is proposed, exploiting the strong ability of recurrent neural network models to process and predict sequence data. The method automatically filters out invalid testcases by learning the structural characteristics of the network protocol, including the value ranges of fields and the constraint relationships between fields, thereby reducing the number of testcases executed by the protocol entity. Experimental results show that, compared with traditional testcase filtering methods for network protocol fuzzing, the proposed method can effectively reduce the time cost of network protocol vulnerability discovery and dramatically improve the efficiency of network protocol fuzzing.
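As a minimal sketch of the QRNN building block (weights, shapes and the downstream classifier are assumptions), the layer below computes its gates with convolutions over the testcase bytes and keeps only a cheap elementwise recurrence, which is what makes QRNNs faster than LSTMs for this kind of filtering.

```python
import numpy as np

def qrnn_layer(X, Wz, Wf, width=2):
    """Minimal quasi-recurrent (QRNN) layer in NumPy.

    X : (T, d_in) sequence (e.g. one-hot-encoded testcase bytes),
    Wz, Wf : (width * d_in, d_out) convolution weights.
    Gates come from convolutions, so they parallelize over time; the only
    recurrence is the elementwise f-pooling at the end."""
    T, d_in = X.shape
    pad = np.vstack([np.zeros((width - 1, d_in)), X])
    # Windowed inputs: each timestep sees the current and previous bytes.
    W = np.stack([pad[t:t + width].ravel() for t in range(T)])
    Z = np.tanh(W @ Wz)                        # candidate values
    F = 1.0 / (1.0 + np.exp(-(W @ Wf)))        # forget gates
    h = np.zeros(Wz.shape[1])
    for t in range(T):                         # f-pooling recurrence
        h = F[t] * h + (1.0 - F[t]) * Z[t]
    return h          # final state, fed to a valid/invalid classifier head
```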
Study on Crowdsourced Testing Intellectual Property Protection Technology Based on Blockchain and Improved CP-ABE
YANG Zhen, HUANG Song, ZHENG Chang-you
Computer Science. 2022, 49 (5): 325-332.  doi:10.11896/jsjkx.210900075
Abstract PDF(3360KB) ( 859 )   
References | Related Articles | Metrics
Crowdsourced testing is the application of crowdsourcing to software testing. It performs testing in a distributed and collaborative way and has received widespread attention. However, the open crowdsourced testing environment and centralized storage mode put intellectual property at risk of leakage and manipulation. In order to realize privacy protection and trusted storage of crowdsourced testing intellectual property, this paper proposes corresponding protection methods for the different kinds of intellectual property involved. For test tasks and tested code, AES and an improved CP-ABE algorithm are used for fine-grained data access control. By outsourcing complex encryption operations to a trusted third party, the proposed algorithm reduces the computing cost of the requester; it also supports dynamic attribute revocation and satisfies forward and backward security. In addition, using blockchain and the InterPlanetary File System (IPFS), a method for distributed trusted storage and consistent rights confirmation of large-scale intellectual property data is realized, protecting intellectual property data from tampering and helping to resolve intellectual property disputes. Finally, performance tests and comparative evaluation verify that the encryption and decryption efficiency of this method is improved over previous methods and that the blockchain achieves high performance. Security analysis shows that the scheme meets the security requirements.
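The hybrid-encryption step can be sketched in a few lines: the bulky artifact is encrypted under a fresh AES key, and only that small key would then be wrapped with CP-ABE under the requester's access policy. The sketch below uses the Python cryptography package's AES-GCM; the CP-ABE, IPFS and blockchain steps are indicated only as comments and are not implemented here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_artifact(plaintext: bytes):
    """Hybrid protection sketch: encrypt the test task / tested code under a
    fresh AES-256-GCM key; in the full scheme only this small key is then
    CP-ABE-encrypted under the access policy and stored via IPFS/blockchain."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                       # 96-bit GCM nonce
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Full scheme (omitted): cp_abe_encrypt(policy, key); pin ciphertext to
    # IPFS; record the content hash on-chain for rights confirmation.
    return key, nonce, ciphertext

key, nonce, ct = protect_artifact(b"crowdsourced test task payload")
assert AESGCM(key).decrypt(nonce, ct, None) == b"crowdsourced test task payload"
```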
Quantum Secured-Byzantine Fault Tolerance Blockchain Consensus Mechanism
REN Chang, ZHAO Hong, JIANG Hua
Computer Science. 2022, 49 (5): 333-340.  doi:10.11896/jsjkx.210400154
Abstract PDF(2505KB) ( 695 )   
References | Related Articles | Metrics
Aiming at the threat that quantum computing attacks pose to classical blockchain consensus mechanisms, a quantum-secured Byzantine fault tolerant consensus mechanism is proposed. Firstly, to address the security threat to public-key digital signatures, this paper proposes a multilinear-hash unconditionally secure signature (MH-USS) scheme based on quantum key distribution (QKD) and a multilinear hash function family. In this scheme, quantum keys are distributed through the QKD network while messages and signatures are transmitted over the classical network; the simplified USS signature scheme is adopted as the main framework and combined with the multilinear hash function family to produce a new USS scheme. The resulting signatures are unforgeable, non-repudiable and transferable. Moreover, the scheme can be implemented on existing equipment and thus has high practical value. Secondly, in view of the relatively low consensus efficiency of the classical Byzantine fault tolerant consensus mechanism PBFT, this paper proposes the quantum secured-Byzantine fault tolerance (QS-BFT) consensus mechanism. By adding a “fast-normal” consensus mode and allowing nodes to vote on empty blocks, the number of communication rounds is reduced and the view-change process is avoided. It is proved that this scheme not only guarantees safety and liveness but also effectively reduces message complexity and improves consensus efficiency. Simulation and performance tests indicate that, compared with a PBFT consensus mechanism built on the same MH-USS signature scheme, the proposed scheme achieves higher throughput and lower latency.
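The multilinear hash family at the core of MH-USS is simple enough to show directly. In the sketch below the tag is h_k(m) = Σ k_i·m_i mod p over a prime field; in the actual scheme the key material is distributed pairwise via QKD and used once, which is what yields information-theoretic security. The choice of prime and the block split are illustrative assumptions.

```python
import secrets

P = 2**61 - 1   # a Mersenne prime defining the field

def multilinear_hash(key, msg_blocks):
    """Multilinear hash tag h_k(m) = sum(k_i * m_i) mod p.  Shows only the
    tag computation; one-time QKD-distributed keys make it unconditionally
    secure in the MH-USS construction."""
    assert len(key) >= len(msg_blocks)
    return sum(k * m for k, m in zip(key, msg_blocks)) % P

key = [secrets.randbelow(P) for _ in range(4)]       # shared via QKD in the scheme
tag = multilinear_hash(key, [104, 101, 108, 112])    # message split into blocks
```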
NTRU Type Fully Homomorphic Encryption Scheme over Prime Power Cyclotomic Rings
QIN Xiao-yue, HUANG Ru-wei, YANG Bo
Computer Science. 2022, 49 (5): 341-346.  doi:10.11896/jsjkx.210300089
Abstract PDF(1490KB) ( 668 )   
References | Related Articles | Metrics
Fully homomorphic encryption (FHE) supports arbitrary computation on ciphertexts without decryption, which provides protection for privacy in cloud computing. However, current FHE schemes constructed with the approximate eigenvector method require complex matrix multiplications, which are computationally expensive and cannot resist subfield attacks. In this paper, a new FHE scheme is proposed that works over a prime-power cyclotomic ring instead of a power-of-two cyclotomic ring, and the complex matrix multiplications in homomorphic multiplication are effectively avoided by modifying the ciphertext form and the decryption structure. Compared with similar schemes, the proposed scheme improves efficiency by a factor of at least lφ(x)/2d and is secure against IND-CPA attacks.
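To make the algebra concrete, the toy sketch below implements multiplication in Z_q[x]/Φ_{p^k}(x), the prime-power cyclotomic ring the scheme works over, using Φ_{p^k}(x) = 1 + x^{p^{k-1}} + … + x^{(p-1)·p^{k-1}}. This is schoolbook arithmetic for illustration only, not the scheme's optimized implementation.

```python
def cyclotomic_prime_power(p, k):
    """Coefficients (low to high) of Phi_{p^k}(x) = sum_{i<p} x^(i*p^(k-1))."""
    step = p ** (k - 1)
    phi = [0] * ((p - 1) * step + 1)
    for i in range(p):
        phi[i * step] = 1
    return phi

def ring_mul(a, b, phi, q):
    """Multiply a, b in Z_q[x] / Phi(x): convolve, then reduce by the monic
    modulus with schoolbook division."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):                   # polynomial multiplication
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % q
    d = len(phi) - 1
    for i in range(len(c) - 1, d - 1, -1):       # eliminate degrees >= deg(Phi)
        coef = c[i]
        if coef:
            for j in range(d + 1):
                c[i - d + j] = (c[i - d + j] - coef * phi[j]) % q
    return c[:d]

phi = cyclotomic_prime_power(3, 2)               # Phi_9 = 1 + x^3 + x^6
prod = ring_mul([1, 2, 3], [4, 5], phi, q=97)    # (1+2x+3x^2)(4+5x) in the ring
```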
Interdiscipline & Frontier
Modified Social Force Model Considering Pedestrian Characteristics and Leaders
LIN Jin-cheng, JI Qing-ge, ZHONG Zhen-wei
Computer Science. 2022, 49 (5): 347-354.  doi:10.11896/jsjkx.210500144
Abstract PDF(2827KB) ( 488 )   
References | Related Articles | Metrics
The social force model is a classic model in crowd movement simulation. It expresses pedestrians' subjective intentions and the interactions between pedestrians in the form of “forces”, and it is concise and easy to interpret. However, many factors affect pedestrian movement, and the original social force model's formulations of the self-driving force and the social psychological force are insufficient, so many researchers have improved the model to make simulations more realistic. This paper focuses on the subjects of the crowd evacuation process, the pedestrians, and models them from two aspects: pedestrian characteristics and pedestrian roles. Pedestrian characteristics include the social relationships between pedestrians, pedestrian personality and individual emotion; pedestrians with different levels of intimacy interfere with each other to different degrees, and emotions also affect pedestrians' judgment. Pedestrian roles distinguish leaders from ordinary pedestrians, and the impact of different roles on the evacuation process is analyzed; leaders can help ordinary pedestrians evacuate. Crowd self-organization simulation experiments verify that the improved model can simulate realistic crowd evacuation while retaining the advantages of the original model. In addition, the evacuation efficiency and exit utilization under four simulation models are measured, and the mean and distribution of the experimental data are analyzed. Experimental results show that the main causes of long evacuation times are the time spent searching for exits and unbalanced exit utilization. In general, pedestrian characteristics and leaders have a positive impact on evacuation efficiency: pedestrian characteristics accelerate pedestrian aggregation and optimize the desired speed, while leaders, besides helping pedestrians find exits, balance the use of exits so that the number of evacuees at each exit is roughly equal.
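For reference, one integration step of the basic social force model being extended can be sketched as follows; the force constants are illustrative, and the paper's additions (intimacy-dependent interaction, emotions, leader roles) would enter as extra force terms.

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, v0=1.34, tau=0.5, A=2.0, B=0.3):
    """One step of the basic social force model: a self-driving force pulls
    each pedestrian toward its desired velocity, and exponential repulsion
    keeps pedestrians apart.  pos, vel, goals are (n, 2) arrays."""
    n = len(pos)
    e = goals - pos                                  # desired directions
    e /= np.linalg.norm(e, axis=1, keepdims=True) + 1e-9
    force = (v0 * e - vel) / tau                     # self-driving force
    for i in range(n):                               # pairwise repulsion
        d = pos[i] - np.delete(pos, i, axis=0)
        dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        force[i] += (A * np.exp(-dist / B) * d / dist).sum(axis=0)
    vel = vel + force * dt
    return pos + vel * dt, vel
```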
Deep Neural Network Operator Acceleration Library Optimization Based on Domestic Many-core Processor
GAO Jie, LIU Sha, HUANG Ze-qiang, ZHENG Tian-yu, LIU Xin, QI Feng-bin
Computer Science. 2022, 49 (5): 355-362.  doi:10.11896/jsjkx.210500226
Abstract PDF(3325KB) ( 654 )   
References | Related Articles | Metrics
Operator acceleration libraries tailored to different hardware devices have become an indispensable part of deep learning frameworks, providing dramatic performance improvements for large-scale training and inference tasks. Current mainstream operator libraries are all developed for the GPU architecture and are not compatible with other heterogeneous designs. The SWDNN operator library, developed for the SW26010 processor, can neither exploit the full performance of the upgraded SW26010 pro processor nor meet the large memory capacity and high memory access bandwidth requirements of today's large neural network models such as GPT-3. According to the architectural characteristics of the SW26010 pro processor and the training requirements of large neural network models, a three-level parallelization and neural network operator task scheduling scheme based on multiple core groups is proposed, which satisfies the memory requirements of large-model training and improves overall computing performance and parallel efficiency. A memory access optimization method with triple asynchronous flows and overlapped computation and memory access is also proposed, which significantly alleviates the memory access bottleneck of neural network operators. Based on these methods, the SWTensor many-core-group operator acceleration library is constructed for the SW26010 pro processor. Experimental results on the natural language processing model GPT-2 show that computation-intensive and memory-access-intensive operators in the SWTensor library reach up to 90.4% and 88.7% of the theoretical peaks of single-precision floating-point performance and memory access bandwidth, respectively.
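The overlap of computation and memory access can be illustrated independently of the hardware. In the sketch below, load/compute/store are placeholders standing in for DMA-in, the core kernel and DMA-out; tile i+1 is prefetched while tile i is being computed, which is the double-buffered special case of the paper's triple asynchronous flow.

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined_operator(tiles, load, compute, store):
    """Double-buffered tile pipeline: while tile i is being computed, tile
    i+1 is already being fetched by the second worker, so memory access
    latency is hidden behind computation."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(load, tiles[0])
        for i, tile in enumerate(tiles):
            data = pending.result()                        # wait for tile i
            if i + 1 < len(tiles):
                pending = pool.submit(load, tiles[i + 1])  # prefetch tile i+1
            store(tile, compute(data))                     # overlaps the prefetch
```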
Parallelization and Locality Optimization for Red-Black Gauss-Seidel Stencil
JI Ying-rui, YUAN Liang, ZHANG Yun-quan
Computer Science. 2022, 49 (5): 363-370.  doi:10.11896/jsjkx.220100119
Abstract PDF(2233KB) ( 734 )   
References | Related Articles | Metrics
Stencil computations are a common loop-nest pattern widely used in scientific and engineering simulations such as computational electromagnetics, weather simulation, geophysics and ocean modeling. As modern processor architectures have evolved toward more cores and deeper memory hierarchies, exploiting parallelism and locality has become the main way to improve program performance, and blocking is one of the principal techniques for doing so. A large number of blocking methods have been proposed for stencils, but most of them are restricted to Jacobi stencils, which feature high parallelism and locality. The Gauss-Seidel stencil has a better convergence rate and is widely used in multigrid calculations, but its data dependences are more complicated. In this paper, a parallel blocking and vectorization algorithm is designed for the red-black ordered Gauss-Seidel stencil, improving its data locality, medium-grained multi-core parallelism and fine-grained in-core parallelism. Experimental results demonstrate the effectiveness of the scheme.
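A compact NumPy version of the red-black sweep shows why this ordering parallelizes: same-colored points never depend on each other, so each half-sweep can be updated simultaneously (and blocked or vectorized, as the paper's scheme does). The Poisson right-hand side is an illustrative choice.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=100):
    """Red-black Gauss-Seidel for the 2D Poisson equation on grid u with
    right-hand side f and spacing h.  Each color's points depend only on
    the other color, so each half-sweep is fully data-parallel."""
    for _ in range(sweeps):
        for color in (0, 1):                 # 0 = red, 1 = black
            i, j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                               np.arange(1, u.shape[1] - 1), indexing='ij')
            mask = (i + j) % 2 == color      # checkerboard coloring
            ii, jj = i[mask], j[mask]
            u[ii, jj] = 0.25 * (u[ii - 1, jj] + u[ii + 1, jj] +
                                u[ii, jj - 1] + u[ii, jj + 1] -
                                h * h * f[ii, jj])
    return u
```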
Participant Selection Strategies Based on Crowd Sensing for River Environmental Monitoring
LI Xiao-dong, YU Zhi-yong, HUANG Fang-wan, ZHU Wei-ping, TU Chun-yu, ZHENG Wei-nan
Computer Science. 2022, 49 (5): 371-379.  doi:10.11896/jsjkx.210200005
Abstract PDF(3567KB) ( 632 )   
References | Related Articles | Metrics
The environment surrounding urban rivers is often damaged and polluted, and how to monitor rivers effectively has gradually attracted the attention of the public, governments and academia. Traditional monitoring methods suffer from high cost, insufficient coverage and other defects. With the increasing popularity of intelligent mobile devices, this paper proposes a new idea: using crowd sensing to monitor the river environment efficiently. The problem can be described as follows: assuming each river reach contains c monitoring points, select r users, according to the movement tracks of a large number of users, to jointly complete the monitoring of all river reaches over s periods; the smaller the number of users r, the lower the monitoring cost. A stepwise-greedy strategy, a global-greedy strategy and an integer-programming strategy are designed to solve this problem, that is, to select the fewest participants that achieve the “s periods - c points - r users” monitoring goal, as sketched below. These strategies are applied to the environmental monitoring of several rivers in Taijiang, Fuzhou. Experimental results show that all of them obtain better solutions than a random strategy, with the integer-programming strategy performing best. However, as the problem grows, the implicit enumeration algorithm used to solve small-scale integer programs becomes intractable. Motivated by this, a discrete particle swarm optimization algorithm based on greedy initialization (GI-DPSO) is designed. Although this algorithm can solve large-scale integer programs, it is time-consuming. Considering both monitoring cost and computational cost, it is suggested to adopt the integer-programming strategy for small-scale problems and the global-greedy strategy for large-scale problems.
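The global-greedy strategy is essentially greedy set cover, as the sketch below shows for hypothetical user tracks; the integer-programming strategy solves the same covering problem exactly, and GI-DPSO replaces the exact solver when the instance grows.

```python
def greedy_select(users, required):
    """Global-greedy participant selection: repeatedly pick the user whose
    track covers the most still-unmonitored (period, monitoring point)
    pairs until every requirement is covered.

    users    : dict user_id -> set of (period, point) pairs the user covers
    required : set of all (period, point) pairs that must be monitored"""
    uncovered, chosen = set(required), []
    while uncovered:
        best = max(users, key=lambda u: len(users[u] & uncovered))
        if not users[best] & uncovered:
            raise ValueError("remaining points cannot be covered by any user")
        chosen.append(best)
        uncovered -= users[best]
    return chosen

# Hypothetical tracks: three users, three (period, point) requirements.
users = {'u1': {(1, 'A'), (1, 'B')}, 'u2': {(1, 'B'), (2, 'A')}, 'u3': {(2, 'A')}}
print(greedy_select(users, {(1, 'A'), (1, 'B'), (2, 'A')}))   # e.g. ['u1', 'u2']
```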