Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 51 Issue 2, 15 February 2024
  
Discipline Frontier
Multi-source Heterogeneous Data Fusion Technologies and Government Big Data Governance System
YAN Jiahe, LI Honghui, MA Ying, LIU Zhen, ZHANG Dalin, JIANG Zhouxian, DUAN Yuhang
Computer Science. 2024, 51 (2): 1-14.  doi:10.11896/jsjkx.221200075
Abstract PDF(5885KB)
With the rapid development of information technology, the data held by governments and enterprises are growing exponentially. However, multiple data sources lead to inconsistent formats, low data quality degrades application results, decentralized management weakens integration services, and heterogeneous modalities cause semantic gaps. Against this background, multi-source heterogeneous data fusion is responsible for effectively integrating multi-modal data from different sources to achieve information complementarity and data association, thereby realizing information enhancement. At present, most studies focus on the big data governance process and multi-modal deep learning, and few works discuss an integral multi-source heterogeneous data fusion framework. Therefore, after reviewing the key technologies, this paper proposes a key-technology framework for multi-source heterogeneous data fusion that covers the processes of “data collection-data cleaning-data integration-data fusion”, and introduces the problems and tasks of each stage. Then, through an example of a government affairs application, a data governance system for government data is designed, which further explains the significance of multi-source heterogeneous data fusion. Finally, the paper is summarized and future work is outlined.
Review of Public Opinion Dynamics Models
LIU Shuxian, XU Huan, WANG Wei, DENG Le
Computer Science. 2024, 51 (2): 15-26.  doi:10.11896/jsjkx.230100072
Abstract PDF(1813KB)
Social networks provide a medium for information dissemination, which drives the rapid development of public opinion. Controlling the direction in which public opinion develops is one of the core issues of public opinion dynamics, and public opinion dynamics models mainly study how agents update their opinions in order to deduce the laws of public opinion evolution. This paper classifies current public opinion dynamics models, analyzes their advantages and disadvantages as well as their applications in different fields, and summarizes future research directions of public opinion dynamics. This is helpful for understanding the laws of public opinion evolution, and thus provides better guidance for governments and other institutions in steering the direction of public opinion.
Database & Big Data & Data Science
MMOS: Memory Resource Sharing Methods to Support Overselling in Multi-tenant Databases
XU Haiyang, LIU Hailong, YANG Chaoyun, WANG Shuo, LI Zhanhuai
Computer Science. 2024, 51 (2): 27-35.  doi:10.11896/jsjkx.231000141
Abstract PDF(3501KB)
This paper presents an oversold memory resource sharing method for multi-tenant databases in online analytical processing scenarios. The current static resource allocation strategy, which assigns a fixed resource quota to each tenant, leads to suboptimal resource utilization. To enhance resource utilization and platform revenue, it is important to share unused free resources among tenants without impacting their performance. Existing resource sharing methods for multi-tenant databases primarily focus on CPU resources; memory resource sharing methods that support overselling are lacking. To address this gap, the paper introduces a novel approach, MMOS, which accurately forecasts the memory requirement interval of each tenant and dynamically adjusts their resource allocation based on the upper limit of the interval. This allows for efficient management of free memory resources, enabling support for more tenants and achieving memory overselling while maintaining optimal performance. Experimental results demonstrate the effectiveness of the proposed method under dynamically changing tenant loads. With different resource pools, the number of supported tenants can be increased by 2~2.6 times, leading to a significant increase in peak resource utilization of 175%~238%. Importantly, the proposed method ensures that the business and performance of each tenant remain unaffected.
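The interval-based overselling idea can be illustrated with a minimal sketch: forecast an upper bound of each tenant's memory demand from recent usage and admit tenants while the forecast bounds fit the physical pool. The forecasting rule (mean plus three standard deviations) and all names below are illustrative assumptions, not the MMOS implementation.

```python
# Illustrative sketch of interval-based memory overselling (not the MMOS code).
# Assumption: a tenant's near-future demand is forecast as mean + 3*std of its
# recent usage samples; MMOS uses its own interval forecasting model.
import statistics

def demand_upper_bound(recent_usage_mb):
    """Forecast the upper limit of a tenant's memory-demand interval."""
    mean = statistics.mean(recent_usage_mb)
    std = statistics.pstdev(recent_usage_mb)
    return mean + 3 * std

def admit_tenants(tenants, pool_mb):
    """Admit tenants while forecast upper bounds fit the physical pool.

    tenants: dict tenant_id -> list of recent usage samples (MB).
    Returns admitted tenant ids with their dynamic quotas.
    """
    admitted, used = {}, 0.0
    for tid, usage in tenants.items():
        quota = demand_upper_bound(usage)
        if used + quota <= pool_mb:
            admitted[tid] = round(quota, 1)
            used += quota
    return admitted

tenants = {"t1": [800, 900, 850], "t2": [400, 420, 390], "t3": [1200, 1100, 1150]}
print(admit_tenants(tenants, pool_mb=2700))
```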
Multivariate Time Series Classification Algorithm Based on Heterogeneous Feature Fusion
QIAO Fan, WANG Peng, WANG Wei
Computer Science. 2024, 51 (2): 36-46.  doi:10.11896/jsjkx.230100135
Abstract PDF(3986KB)
With the advances in big data and sensor technology, multivariate time series classification has become an important problem in data mining. Multivariate time series are characterized by high dimensionality, complex inter-dimensional relations, and variable data forms, which makes classification methods generate huge feature spaces in which it is difficult to select discriminative features, resulting in low accuracy and hindering interpretability. Therefore, a multivariate time series classification algorithm based on heterogeneous feature fusion is proposed in this paper. The proposed algorithm integrates time-domain, frequency-domain, and interval-based features. Firstly, a small number of representative features of different types are extracted for each dimension. Then, the features of all dimensions are fused by multivariate feature transformation to learn the classifier. For univariate feature extraction, the algorithm generates different types of feature candidates based on a tree structure, and a clustering algorithm is designed to aggregate redundant and similar features into a small number of representative features, which effectively reduces the number of features and enhances the interpretability of the method. To verify the effectiveness of the algorithm, extensive experiments are conducted on the public UEA datasets, and the proposed algorithm is compared with existing multivariate time series classification methods. The results show that the proposed algorithm is more accurate than the comparison methods and that the feature fusion is reasonable. Moreover, the interpretability of the classification results is demonstrated through a case study.
Fusion Model of Housekeeping Service Course Recommendation Based on Knowledge Graph
ZOU Chunling, ZHU Zhengzhou
Computer Science. 2024, 51 (2): 47-54.  doi:10.11896/jsjkx.221200149
Abstract PDF(3638KB)
Housekeeping service practitioners' demand for online learning of housekeeping service courses has increased. However, existing online learning websites for housekeeping service courses have few resources, insufficiently systematic courses, and no course recommendation function, which raises the threshold of online learning for housekeeping service practitioners. Based on an analysis of existing online learning websites for housekeeping service courses, this paper proposes to construct a knowledge graph of housekeeping service courses, integrates this knowledge graph with a recommendation algorithm, and designs R-RippleNet, a housekeeping service course recommendation model that combines a rule model with a deep learning-based water-wave preference propagation model. The R-RippleNet model serves both old students and new students: for old students, courses are recommended by the water-wave preference propagation model, while for new students, courses are recommended by the rule model. Experimental results show that for old students using R-RippleNet, the AUC value is 95%, the ACC value is 89%, and the F1 value is 89%; for new students, the mean overall precision is 77% and the mean NDCG is 93%.
Knowledge Graph and User Interest Based Recommendation Algorithm
XU Tianyue, LIU Xianhui, ZHAO Weidong
Computer Science. 2024, 51 (2): 55-62.  doi:10.11896/jsjkx.221200169
Abstract PDF(2466KB)
To address cold start and data sparsity in collaborative filtering recommendation algorithms, this paper introduces the knowledge graph, which is rich in semantic and path information. Owing to its graph structure, recommendation algorithms that apply graph neural networks to knowledge graphs have been favored by researchers. The core of such recommendation algorithms is to obtain item features and user features; however, research in this area focuses on better expressing item features while ignoring the representation of user features. Building on graph neural networks, a recommendation algorithm based on knowledge graphs and user interest is proposed. The algorithm constructs user interest through an independent user interest capture module that learns users' historical information and models their interest, so that both users and items are well represented. Experimental results on the MovieLens dataset show that the proposed algorithm makes full use of the data, performs well, and improves recommendation accuracy.
Time Series Clustering Method Based on Contrastive Learning
YANG Bo, LUO Jiachen, SONG Yantao, WU Hongtao, PENG Furong
Computer Science. 2024, 51 (2): 63-72.  doi:10.11896/jsjkx.221200038
Abstract PDF(4208KB)
With deep clustering methods, which rely heavily on complex feature extraction networks and clustering algorithms, it is difficult to define the similarity between time series intuitively. Contrastive learning can define time series similarity from the perspective of positive and negative sample pairs and jointly optimize feature extraction and clustering. Based on contrastive learning, this paper proposes a time series clustering model that does not rely on a complex representation network. To address the problem that existing time series data augmentation methods cannot describe the transformation invariance of time series, a new augmentation method is proposed that captures the similarity of sequences while ignoring the time-domain characteristics of the data. The proposed clustering model constructs positive and negative sample pairs by setting different shape transformation parameters, learns feature representations, and uses a cross-entropy loss to maximize the similarity of positive pairs and minimize that of negative pairs in instance-level and cluster-level contrast. The model jointly learns feature representation and cluster assignment in an end-to-end fashion. Extensive experiments on 32 UCR datasets show that the proposed model obtains performance equal to or better than that of existing methods without relying on a specific representation learning network.
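The instance-level half of such a contrastive objective is typically an NT-Xent-style loss; the sketch below shows that loss in isolation, assuming pre-computed, L2-normalized embeddings of two augmented views. The paper's cluster-level loss and shape-transformation augmentations are not reproduced.

```python
# Minimal NT-Xent-style instance-level contrastive loss (illustrative only).
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (n, d) L2-normalized embeddings of two views of n series."""
    z = np.concatenate([z1, z2], axis=0)          # (2n, d) stacked views
    sim = z @ z.T / tau                           # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive indices
    logp = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()                           # cross-entropy over 2n-1 candidates

rng = np.random.default_rng(0)
e = rng.normal(size=(8, 16))
e /= np.linalg.norm(e, axis=1, keepdims=True)
v = e + 0.05 * rng.normal(size=e.shape)           # a slightly perturbed second view
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(nt_xent(e, v))
```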
Logical Regression Click Prediction Algorithm Based on Combination Structure
GUO Shangzhi, LIAO Xiaofeng, XIAN Kaiyi
Computer Science. 2024, 51 (2): 73-78.  doi:10.11896/jsjkx.230100052
Abstract PDF(2183KB)
With the rapid development of the Internet and advertising platforms, and in the face of massive advertising information, an improved logistic regression click prediction algorithm, logical regression of combination structure (LRCS), is proposed to improve the prediction of user click-through rate. The algorithm is motivated by the observation that different types of features may have different audiences. First, FM is used to combine features, generating two types of combined features. Second, one type of combined feature is fed into a clustering algorithm for clustering. Finally, the other type of combined feature is input into the segmented GBDT plus logistic regression combination model generated from the clustering for prediction. Multi-angle validation on two public datasets and comparison with other commonly used click prediction algorithms show that LRCS achieves a certain performance improvement in click prediction.
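The "GBDT + logistic regression" component named above follows a classic stacking recipe: re-encode each sample by the leaves it reaches in a trained gradient-boosting model, then fit a logistic regression on the one-hot leaf indices. The sketch below shows that recipe only; the FM feature combination and per-cluster segmentation of LRCS are omitted.

```python
# Sketch of GBDT -> logistic regression stacking, the base of LRCS-style models.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Each sample is re-encoded by the index of the leaf it falls into in every tree.
enc = OneHotEncoder(handle_unknown="ignore")
leaves_tr = gbdt.apply(X_tr).reshape(len(X_tr), -1)
leaves_te = gbdt.apply(X_te).reshape(len(X_te), -1)
lr = LogisticRegression(max_iter=1000).fit(enc.fit_transform(leaves_tr), y_tr)

print("test accuracy:", lr.score(enc.transform(leaves_te), y_te))
```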
Fuzzy Systems Based on Regular Vague Partitions and Their Approximation Properties
PENG Xiaoyu, PAN Xiaodong, SHEN Hanhan, HE Hongmei
Computer Science. 2024, 51 (2): 79-86.  doi:10.11896/jsjkx.221100229
Abstract PDF(1930KB)
This paper investigates the approximation problem of fuzzy systems based on different fuzzy basis functions. Firstly, multi-dimensional regular vague partitions are established from one-dimensional regular vague partitions and overlap functions, and fuzzy systems are designed by taking the elements of the partition as fuzzy basis functions. With the help of the Weierstrass approximation theorem, it is shown that these fuzzy systems are universal approximators, and the corresponding approximation error bounds are presented. Secondly, this paper proposes polynomial, exponential, and logarithmic fuzzy systems and gives their approximation error bounds in terms of the parameters of the membership functions. Finally, experiments are designed to compare the approximation capability of different fuzzy systems. Experimental results further verify the correctness of the theoretical analysis.
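For reference, a fuzzy system over a fuzzy partition has the generic form below, where the φ_i are the fuzzy basis functions induced by the partition and the c_i are consequent constants; the symbols are generic, and the paper's regular vague partitions specialize the φ_i. Universal approximation is then argued by comparing f with polynomial approximants via the Weierstrass theorem.

```latex
f(x) \;=\; \sum_{i=1}^{n} c_i \,\varphi_i(x),
\qquad \varphi_i(x) \ge 0,\quad \sum_{i=1}^{n} \varphi_i(x) = 1 \;\; \text{for all } x .
```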
Label Noise Filtering Framework Based on Outlier Detection
XU Maolong, JIANG Gaoxia, WANG Wenjian
Computer Science. 2024, 51 (2): 87-99.  doi:10.11896/jsjkx.221100264
Abstract PDF(6215KB)
Noise is an important factor affecting the reliability of machine learning models, and label noise has a more decisive influence on model training than feature noise. Reducing label noise is a key step in classification tasks. Filtering is an effective way to deal with label noise: it neither requires estimating the noise rate nor relies on any loss function. However, most filtering algorithms may cause an over-cleaning phenomenon. To solve this problem, a label noise filtering framework based on outlier detection is first proposed, and a label noise filtering algorithm via adaptive nearest neighbor clustering (AdNN) is then presented. AdNN transforms label noise detection into an outlier detection problem. It considers the samples of each category separately and identifies all outliers; outliers that are not noise are then screened out according to relative density, and the real label noise among the outliers is found and removed by a defined noise factor. Experiments on synthetic and benchmark datasets show that the proposed noise filtering method not only alleviates the over-cleaning phenomenon but also achieves good noise filtering effectiveness and classification performance.
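The per-class, outlier-driven filtering step can be sketched as follows, with scikit-learn's LocalOutlierFactor standing in for AdNN's adaptive nearest-neighbor clustering; the relative-density screening and noise factor of the paper are not reproduced, so this simplified filter removes all per-class outliers.

```python
# Simplified per-class outlier-based label-noise filter (not the AdNN algorithm).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def filter_label_noise(X, y, n_neighbors=10):
    """Return a boolean mask of samples kept after per-class outlier removal."""
    keep = np.ones(len(y), dtype=bool)
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        if len(idx) <= n_neighbors:
            continue
        flags = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X[idx])
        keep[idx[flags == -1]] = False        # -1 marks outliers in this class
    return keep

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[:3] = 1                                     # inject label noise
mask = filter_label_noise(X, y)
print("removed:", int((~mask).sum()), "samples")
```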
Non-negative Matrix Factorization Parallel Optimization Algorithm Based on Lp-norm
HUANG Lulu, TANG Shuyu, ZHANG Wei, DAI Xiangguang
Computer Science. 2024, 51 (2): 100-106.  doi:10.11896/jsjkx.230300040
Abstract PDF(2443KB)
Non-negative matrix factorization algorithm is an important tool for image clustering,data compression and feature extraction.Traditional non-negative matrix factorization algorithms mostly use Euclidean distance to measure reconstruction error,which has shown its effectiveness in many tasks,but still has the problems of suboptimal clustering results and slow convergence.To solve these problems,the loss function of non-negative matrix factorization is reconstructed by Lp-norm to obtain better clustering results by adjusting the coefficient p.Based on the collaborative optimization theory and Majorization-Minimization algorithm,this paper uses the particle swarm optimization to solve the non-negative matrix factorization problem of reconstruction in parallel.The feasibility and effectiveness of the proposed method is verified in real datasets,and the experimental results show that the proposed algorithm significantly improves program execution efficiency and outperforms the traditional non-negative matrix decomposition algorithm in a series of evaluation metrics.
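The Lp-norm reconstruction loss referred to above can be stated as below; the constraints follow the standard NMF setup, and the paper's exact formulation may differ in details. Setting p = 2 recovers the usual Euclidean loss, and tuning p trades robustness against ease of optimization.

```latex
\min_{W \ge 0,\; H \ge 0} \; \lVert X - WH \rVert_p^p
  \;=\; \sum_{i,j} \bigl\lvert x_{ij} - (WH)_{ij} \bigr\rvert^{p},
\qquad X \in \mathbb{R}_{+}^{m \times n},\; W \in \mathbb{R}_{+}^{m \times k},\; H \in \mathbb{R}_{+}^{k \times n}.
```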
Computer Graphics & Multimedia
Image Segmentation Based on Deep Learning: A Survey
HUANG Wenke, TENG Fei, WANG Zidan, FENG Li
Computer Science. 2024, 51 (2): 107-116.  doi:10.11896/jsjkx.230900002
Abstract PDF(1716KB)
Image segmentation is a fundamental task in computer vision whose main purpose is to extract meaningful and coherent regions from the input image. Over the years, a wide variety of techniques have been developed for image segmentation, from traditional methods to more recent techniques based on convolutional neural networks. With the development of deep learning, ever more deep learning algorithms have been applied to image segmentation tasks. In particular, there has been a surge of scholarly interest over the past two years, and many deep learning algorithms for image segmentation have emerged. However, most of these new algorithms have not yet been summarized or analyzed, which hinders subsequent research. This paper provides a comprehensive review of the literature on deep learning-based image segmentation published in the past two years. First, it briefly introduces common datasets for image segmentation. Next, it presents a new classification of deep learning-based image segmentation methods. Finally, existing challenges are discussed and future research directions are outlined.
Unsupervised Learning of Monocular Depth Estimation: A Survey
CAI Jiacheng, DONG Fangmin, SUN Shuifa, TANG Yongheng
Computer Science. 2024, 51 (2): 117-134.  doi:10.11896/jsjkx.230400197
Abstract PDF(3783KB)
As a key component of 3D reconstruction, autonomous driving, and visual SLAM, depth estimation has always been a hot research direction in computer vision; among its branches, monocular depth estimation based on unsupervised learning has attracted wide attention from academia and industry because of its convenient deployment and low computational cost. This paper first reviews the basic knowledge and research status of depth estimation, briefly introducing the advantages and disadvantages of depth estimation based on parametric learning, non-parametric learning, supervised learning, semi-supervised learning, and unsupervised learning. Secondly, the research progress of unsupervised monocular depth estimation is comprehensively summarized in five categories: methods combined with interpretable masks, with visual odometry, with prior auxiliary information, with generative adversarial networks, and with real-time lightweight networks; typical framework models are introduced and compared. Then, applications of unsupervised monocular depth estimation in medicine, autonomous driving, agriculture, the military, and other fields are introduced. Finally, the common datasets used for unsupervised depth estimation are briefly introduced, and future research directions of unsupervised monocular depth estimation in this rapidly growing field are proposed.
Medical Image Segmentation Algorithm Based on Self-attention and Multi-scale Input-Output
DING Tianshu, CHEN Yuanyuan
Computer Science. 2024, 51 (2): 135-141.  doi:10.11896/jsjkx.221100260
Abstract PDF(2429KB)
Refined segmentation of fundus images with diabetic retinopathy can better assist doctors in diagnosis. The appearance of large-scale, high-resolution segmentation datasets provides favorable conditions for more refined segmentation. Mainstream segmentation networks based on U-Net use convolution, a local operation, and therefore cannot fully exploit global information when predicting pixels; moreover, their single-input, single-output structure makes it difficult to obtain multi-scale feature information. To make the most of existing large-scale, high-resolution fundus lesion segmentation datasets and achieve more refined segmentation, better segmentation methods need to be designed. In this paper, U-Net is redesigned around a self-attention mechanism and a multi-scale input/output structure, and a new segmentation network, SAM-Net, is proposed. Self-attention modules replace traditional convolutional modules, increasing the network's ability to capture global information. Multi-scale input and multi-scale output structures are introduced so that the network can more easily obtain multi-scale feature information. An image slicing method is used to reduce the input size of the model, preventing the training difficulty of the neural network from increasing with high-resolution inputs. Finally, experimental results on the IDRiD and FGADR datasets show that SAM-Net achieves better performance than other methods.
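The operator substituted for convolution here is scaled dot-product self-attention over flattened feature-map tokens; a minimal single-head version is sketched below. This is the generic operator, not SAM-Net's multi-scale wiring or slicing.

```python
# Single-head scaled dot-product self-attention over flattened image patches.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # joint query/key/value projection
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                      # every token attends to all tokens

tokens = torch.randn(2, 196, 64)             # e.g. a 14x14 feature map, flattened
print(SelfAttention2d(64)(tokens).shape)     # torch.Size([2, 196, 64])
```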
Multi-guided Point Cloud Registration Network Combined with Attention Mechanism
LIU Xuheng, BAI Zhengyao, XU Zhu, DU Jiajin, XIAO Xiao
Computer Science. 2024, 51 (2): 142-150.  doi:10.11896/jsjkx.230200073
Abstract PDF(3185KB)
This paper proposes a point cloud registration network, AMGNet, which uses the probability matrix of matching points between point clouds and the spatial information feature matrix of the point clouds to search for correspondences and jointly determine the weights of corresponding points. First, a point cloud feature extraction network obtains high-dimensional features of the two unregistered point clouds, and a Transformer then fuses the independent features with contextual information. Weight assignment uses a strategy in which the two matrices are determined jointly. Finally, singular value decomposition is used to obtain the required rigid transformation matrix. Experiments are conducted on synthetic datasets such as ModelNet40 as well as on real scenes such as 7Scenes. The results show that in the ModelNet40 experiments with unseen targets, the mean squared errors of the rotation matrix and the translation vector are reduced to 0.025 and 0.0046, respectively. AMGNet achieves high registration accuracy, strong resistance to interference, and good generalization ability.
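The closing SVD step mentioned above is the weighted Kabsch solution: given matched points and correspondence weights, the optimal rigid transform has a closed form. A minimal sketch follows, with the weights simply given rather than produced by AMGNet's dual-matrix strategy.

```python
# Weighted Kabsch/SVD solution for the rigid transform between matched point sets.
import numpy as np

def rigid_transform(src, dst, w):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)                 # weighted centroids
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d)) # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(0)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst, np.ones(len(src)))
print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```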
Infrared Small Target Detection Based on Dilated Convolutional Conditional Generative Adversarial Networks
ZHANG Guodong, CHEN Zhihua, SHENG Bin
Computer Science. 2024, 51 (2): 151-160.  doi:10.11896/jsjkx.221200045
Abstract PDF(4901KB)
Deep learning-based object detection methods have achieved great performance in general object detection tasks by virtue of their powerful modeling capabilities. However, deeper network designs and the overuse of pooling operations lead to a loss of semantic information, which hurts performance when detecting infrared small targets with low signal-to-noise ratios and few essential pixel features. This paper proposes a novel infrared small target detection algorithm based on a dilated convolutional conditional generative adversarial network. A generative network stacked with dilated convolutions makes full use of context information to establish layer-to-layer correlations and helps retain the semantic information of infrared small targets in the deep network. In addition, the generative network integrates a channel-space mixed attention module that selectively amplifies target information and suppresses background clutter. Furthermore, a self-attention association module is proposed to handle the semantic conflicts generated during fusion between layers. A variety of evaluation metrics are used to compare the proposed method with current state-of-the-art methods, demonstrating its superiority in complex backgrounds. On the public SIRST dataset, the F-score of the proposed model is 64.70%, which is 8.29% higher than the traditional method and 7.29% higher than the deep learning method. On the public ISOS dataset, the F-score is 64.54%, which is 23.59% higher than the traditional method and 6.58% higher than the deep learning method.
Hierarchical Conformer Based Speech Synthesis
WU Kewei, HAN Chao, SUN Yongxuan, PENG Menghao, XIE Zhao
Computer Science. 2024, 51 (2): 161-171.  doi:10.11896/jsjkx.221100125
Abstract PDF(5383KB)
Speech synthesis requires turning input text into a speech signal comprising phonemes, words, and utterances. Existing speech synthesis methods treat the utterance as a whole and find it difficult to synthesize speech signals of different lengths accurately. In this paper, we analyze the hierarchical relationships embedded in speech signals, design a Conformer-based hierarchical text encoder and a Conformer-based hierarchical speech encoder, and propose a speech synthesis model based on a hierarchical text-speech Conformer. First, the model constructs hierarchical text encoders according to the length of the input text signal, with three levels: phoneme-level, word-level, and utterance-level text encoders. Each level describes text information of a different length and uses the Conformer's attention mechanism to learn the relationships between temporal features in signals of that length. With the hierarchical text encoder, the model can identify the information that needs to be emphasized at different lengths within an utterance and effectively extract text features at different lengths, alleviating the uncertainty in the duration of the synthesized speech signal. Second, the hierarchical speech encoder likewise comprises phoneme-level, word-level, and utterance-level speech encoders. In each level, the text features are used as the query vectors of the Conformer, while the speech features serve as its key and value vectors, so as to extract the matching relationship between text and speech features. The hierarchical speech encoder and the text-speech matching relations alleviate the inaccurate synthesis of speech signals of different lengths. The hierarchical text-speech encoder modeled in this paper can be flexibly embedded into a variety of existing decoders to provide more reliable synthesis through the complementarity between text and speech. Experimental validation on the LJSpeech and LibriTTS datasets shows that the Mel cepstral distortion of the proposed method is smaller than that of existing speech synthesis methods.
Two-stage Visible Watermark Removal Model Based on Global and Local Features for Document Images
ZHAO Jiangfeng, HE Hongjie, CHEN Fan, YANG Shubin
Computer Science. 2024, 51 (2): 172-181.  doi:10.11896/jsjkx.230600144
Abstract PDF(6094KB)
The visible watermark is a common measure for digital image copyright protection. Analyzing watermark removal results can verify the effectiveness of watermarks and provide reference and inspiration for watermark designers. Currently, most watermark removal methods are studied on natural images, while document images are also widely used in daily life. However, owing to the lack of publicly available datasets for watermark removal from document images, research on such images is relatively limited. To explore the effectiveness of watermark removal methods on document images, a dataset for watermark removal from single document images, the single document image watermark removal dataset (SDIWRD), is constructed. In studying watermark removal from document images, it is found that the results of existing methods often leave watermark artifacts, such as body artifacts or outline artifacts. To address this problem, a two-stage watermark removal model based on global and local features is proposed, which uses a coarse-to-fine two-stage half-instance normalized encoder-decoder architecture. In the coarse stage, a global and local feature extraction module is designed to enhance the capture of global spatial features while preserving the extraction of local details, which helps watermark removal. In the fine stage, the fine network shares the weights of the coarse stage and builds a recurrent feature fusion module to fully exploit the important features of the coarse-stage encoder and provide rich contextual information for detailed watermark removal. In addition, a structural similarity loss is used to improve the visual quality of the dewatermarked images. The proposed method is tested on the SDIWRD dataset, achieving a peak signal-to-noise ratio (PSNR) of 41.21 dB, a structural similarity (SSIM) of 99.07%, and a root mean square error (RMSE) of 3.64, better than existing methods. It is also tested on the publicly available CLWD color watermark removal dataset, achieving a PSNR of 39.31 dB, an SSIM of 98.81%, and an RMSE of 3.50, again better than existing watermark removal methods. These results demonstrate that the proposed method generalizes well and can effectively alleviate watermark artifacts. Finally, some suggestions for resisting watermark removal are given. The proposed method and dataset are publicly accessible at the corresponding website.
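The PSNR and RMSE figures quoted above are computed in the standard way for 8-bit images, e.g. as in the sketch below; SSIM requires a windowed computation (available as skimage.metrics.structural_similarity) and is omitted here.

```python
# Standard RMSE and PSNR for 8-bit images, as reported in watermark-removal work.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64), dtype=np.uint8)
restored = np.clip(clean.astype(int) + rng.integers(-4, 5, clean.shape), 0, 255)
print(f"RMSE={rmse(clean, restored):.2f}, PSNR={psnr(clean, restored):.2f} dB")
```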
Cross-scene Gesture Recognition Based on Point Cloud Trajectories and Compressed Doppler
ZHANG Hongwang, ZHOU Rui, CHENG Yu, LIU Chenxu
Computer Science. 2024, 51 (2): 182-188.  doi:10.11896/jsjkx.230400184
Abstract PDF(2510KB)
Millimeter-wave radar can be used for various sensing tasks, such as activity recognition, gesture recognition, and heart rate sensing. Among them, gesture recognition is a research hotspot that enables contactless human-computer interaction. Most existing studies on gesture recognition feed point clouds or range-Doppler maps into neural networks for pattern recognition. However, there are several problems. First, the robustness of these methods is poor: changes in the user or the user's location affect the received millimeter-wave signals, reducing the accuracy of the sensing model. Second, these methods input the complete range-Doppler map into the neural network, which makes the model complicated and makes it hard for the model to focus on the sensing task, because many regions of the map are unrelated to it. To solve these problems, this paper first builds the gesture trajectory from multiple consecutive frames of point clouds, then cuts and compresses the corresponding consecutive range-Doppler maps to obtain a two-dimensional local Doppler map. Finally, features are extracted from the point cloud trajectory and the two-dimensional local Doppler map by separate neural networks, concatenated, and classified by a fully connected neural network. Experiments show that the proposed method focuses on the gesture itself and achieves a recognition accuracy of 98%, as well as 93% for new users and 92% for new locations when the user or location changes, outperforming the state of the art.
LNG-Transformer: An Image Classification Network Based on Multi-scale Information Interaction
WANG Wenjie, YANG Yan, JING Lili, WANG Jie, LIU Yan
Computer Science. 2024, 51 (2): 189-195.  doi:10.11896/jsjkx.221100218
Abstract PDF(2444KB)
Owing to the superior representation capability of the Transformer's self-attention mechanism, several researchers have developed image processing models based on self-attention and achieved great success. However, traditional self-attention-based networks for image classification cannot balance global information and computational complexity, which limits the wide application of self-attention. This paper proposes an efficient and scalable attention module, Local Neighbor Global Self-Attention (LNG-SA), that can interact with local, neighbor, and global information at any stage. By cascading LNG-SA modules, a new network called LNG-Transformer is created. LNG-Transformer adopts a hierarchical structure that provides excellent flexibility and has computational complexity proportional to image resolution. The properties of LNG-SA enable LNG-Transformer to exchange local, neighbor, and global information even at early, high-resolution stages, resulting in higher efficiency and enhanced learning capacity. Experimental results show that LNG-Transformer performs well at image classification.
Novel Image Classification Model Based on Depth-wise Convolution Neural Network and Visual Transformer
ZHANG Feng, HUANG Shixin, HUA Qiang, DONG Chunru
Computer Science. 2024, 51 (2): 196-204.  doi:10.11896/jsjkx.221100234
Abstract PDF(3194KB)
Deep learning-based image classification models have been successfully applied in various scenarios. Current image classification models fall into two classes: CNN-based classifiers and Transformer-based classifiers. Owing to their limited receptive field, CNN-based classifiers cannot model the global relations of an image, which decreases classification accuracy, while Transformer-based classifiers usually segment the image into non-overlapping patches of equal size, which harms the local information between adjacent patches. In addition, Transformer-based models often require pre-training on large datasets, resulting in high computational costs. To tackle these problems, an efficient pyramid vision Transformer (EPVT) based on depth-wise convolution is proposed to extract both local and global information between adjacent image patches at low computational cost. The EPVT model consists of three key components: a local perception (LP) module, a spatial information fusion (SIF) module, and a convolutional feed-forward network (CFFN) module. The LP module captures the local correlation of image patches. The SIF module fuses local information between adjacent patches and improves the feature expression ability of EPVT by exploiting long-distance dependencies between different patches. The CFFN module encodes position information and reconstructs tensors between feature patches. Various experiments are conducted on benchmark datasets, and the results show that EPVT achieves 82.6% classification accuracy on ImageNet-1K, outperforming most SOTA models at lower computational complexity.
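The depth-wise convolution that such models build on is a grouped per-channel convolution, usually paired with a 1x1 point-wise mix; the generic operator is sketched below, not the paper's LP/SIF/CFFN modules.

```python
# Depth-wise separable convolution: per-channel 3x3 filtering + 1x1 channel mix.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, channels, out_channels):
        super().__init__()
        # groups=channels -> one 3x3 filter per input channel (local and cheap)
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(channels, out_channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 56, 56)                    # (batch, channels, height, width)
print(DepthwiseSeparableConv(64, 128)(x).shape)   # torch.Size([1, 128, 56, 56])
```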
Recursive Gated Convolution Based Super-resolution Network for Remote Sensing Images
LIU Changxin, WU Ning, HU Lirui, GAO Ba, GAO Xueshan
Computer Science. 2024, 51 (2): 205-216.  doi:10.11896/jsjkx.230800017
Abstract PDF(4547KB)
Due to hardware manufacturing constraints, it is usually difficult to obtain high-resolution (HR) images in remote sensing. Reconstructing an HR image from a low-resolution remote sensing image via single-image super-resolution (SISR) is a common approach. Convolutional neural networks (CNNs) were introduced to super-resolution reconstruction and effectively improved performance. However, classic CNN-based approaches typically use low-order attention to extract deep features, which limits their reconstruction ability; moreover, their receptive field is limited, lacking the ability to learn long-range dependencies. To solve these problems, a recursive gated convolution-based super-resolution method for remote sensing images (RGCSR) is proposed. RGCSR introduces recursive gated convolution (gnConv) to learn global dependencies and local details, acquiring high-order features through high-order spatial interactions. First, a high-order interaction feed-forward block (HFB), consisting of a high-order interaction sub-module (HorBlock) and a feed-forward network (FFN), extracts high-order features. Then, a feature optimization module (FOB) containing channel attention (CA) and gnConv optimizes the output features of each intermediate module. Finally, comparisons on multiple datasets show that RGCSR achieves better reconstruction and visualization performance than existing CNN-based solutions.
Artificial Intelligence
Survey of Event Extraction in Low-resource Scenarios
LIU Tao, JIANG Guoquan, LIU Shanshan, LIU Liu, HUAN Zhigang
Computer Science. 2024, 51 (2): 217-237.  doi:10.11896/jsjkx.221200142
Abstract PDF(2161KB)
As a subtask of information extraction, event extraction aims to extract structured event information from unstructured text. Current automated information extraction methods based on machine learning and deep learning rely excessively on labeled data, but standard datasets in most domains are small and unevenly distributed, so low-resource scenarios have become an important bottleneck limiting the performance of automated information extraction. Although many scholars have conducted in-depth research on low-resource scenarios in recent years and produced remarkable results, research on event extraction in such scenarios still lacks a systematic review. This paper comprehensively summarizes and analyzes the existing work. First, it introduces the definitions of the related tasks and divides event extraction in low-resource scenarios into three categories. Around this classification, six families of techniques are then discussed: approaches based on transfer learning, prompt learning, unsupervised learning, weakly supervised learning, data and auxiliary knowledge enhancement, and meta learning. Subsequently, the shortcomings of current methods and strategies for future improvement are pointed out. The related datasets and evaluation metrics are then introduced, and the experimental results of representative techniques are summarized and analyzed. Finally, the challenges and future research trends of event extraction in low-resource scenarios are summarized from a global perspective.
Hierarchical Document Classification Method Based on Improved Self-attention Mechanism and Representation Learning
LIAO Xingbin, QIAN Yangge, WANG Qianlei, QIN Xiaolin
Computer Science. 2024, 51 (2): 238-244.  doi:10.11896/jsjkx.221100266
Abstract PDF(2239KB)
An essential task in document classification is studying how to represent input features effectively; sentence and document vector representations can assist downstream natural language processing tasks such as text sentiment analysis and data leakage prevention. Feature representation is also increasingly becoming one of the keys to the performance bottleneck and interpretability of document classification. A hierarchical document classification model is proposed to address the extensive repetitive computation and lack of interpretability faced by existing hierarchical models, and the effects of sentence and document representations on document classification are investigated. The proposed model integrates a sentence encoder and a document encoder that fuse input feature vectors using an improved self-attention mechanism, forming a hierarchy that processes document-level data level by level, simplifying computation while enhancing interpretability. Compared with models that use only the special token vector of a pre-trained model as the sentence representation, the proposed model achieves an average improvement of 4% on five public document classification datasets, and it is on average about 2% better than models that use the mean attention outputs of the word vector matrix.
Local Interpretable Model-agnostic Explanations Based on Active Learning and Rational Quadratic Kernel
ZHOU Shenghao, YUAN Weiwei, GUAN Donghai
Computer Science. 2024, 51 (2): 245-251.  doi:10.11896/jsjkx.230300028
Abstract PDF(2339KB)
With the widespread use of deep learning models, people are increasingly aware that explaining model decisions is a problem in urgent need of a solution: complex, hard-to-interpret black-box models hinder the deployment of algorithms in real scenarios. LIME is the most popular local explanation method, but the perturbed data it generates are unstable, leading to bias in the final explanation. To solve these problems, local interpretable model-agnostic explanations based on active learning and a rational quadratic kernel, ActiveLIME, is proposed, which makes the local interpretable model more faithful to the original classifier. After ActiveLIME generates the perturbed data, it samples perturbations with an active learning query strategy, selects highly uncertain perturbations for training, and uses the local model with the highest accuracy during iteration to generate explanations for the instances of interest. For high-dimensional sparse samples prone to local overfitting, a rational quadratic kernel is introduced into the model's loss function to reduce overfitting. Experiments indicate that ActiveLIME achieves better local fidelity and explanation quality than traditional local explanation algorithms.
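The rational quadratic kernel in its standard form is shown below; α and ℓ are generic positive parameters, and the paper's exact placement of the kernel inside the LIME loss may differ. As α grows large the kernel approaches the RBF kernel exp(-‖x-x′‖²/(2ℓ²)), while finite α gives heavier tails.

```latex
k_{\mathrm{RQ}}(x, x') \;=\; \left( 1 + \frac{\lVert x - x' \rVert^{2}}{2\,\alpha\,\ell^{2}} \right)^{-\alpha},
\qquad \alpha,\ \ell > 0 .
```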
Option-Critic Algorithm Based on Mutual Information Optimization
LI Junwei, LIU Quan, XU Yapeng
Computer Science. 2024, 51 (2): 252-258.  doi:10.11896/jsjkx.221100019
Abstract PDF(2902KB)
As an important research topic in hierarchical reinforcement learning, temporal abstraction allows hierarchical reinforcement learning agents to learn policies at different time scales, which can effectively address the sparse reward problem that deep reinforcement learning struggles with. How to learn a good temporal abstraction policy end-to-end has always been a research challenge in hierarchical reinforcement learning. Built on the Option framework, Option-Critic (OC) effectively solves the above problem through policy gradient theory. However, during policy learning, the OC framework suffers from a degradation problem: the action distributions of the intra-option policies become very similar. This degradation hurts the experimental performance of the OC framework and leads to poor interpretability of the options. To solve this problem, mutual information is introduced as an intrinsic reward, and an Option-Critic algorithm with mutual information optimization (MIOOC) is proposed. The MIOOC algorithm is combined with the proximal policy Option-Critic algorithm to ensure the diversity of the lower-level policies. To verify the effectiveness of the algorithm, MIOOC is compared with several common reinforcement learning methods in continuous environments. Experimental results show that the MIOOC algorithm speeds up model learning, improves experimental performance, and yields more distinguishable intra-option policies.
Semi-supervised Learning Algorithm Based on Maximum Margin and Manifold Hypothesis
DAI Wei, CHAI Jing, LIU Yajiao
Computer Science. 2024, 51 (2): 259-267.  doi:10.11896/jsjkx.221100136
Abstract PDF(2203KB)
Semi-supervised learning is a weakly supervised learning pattern between supervised and unsupervised learning. It combines a small number of labeled instances with a large number of unlabeled instances to build a model, hoping to achieve better accuracy than supervised learning that uses labeled instances only. Within this pattern, this paper proposes a semi-supervised learning algorithm that combines the maximum margin with the manifold hypothesis on the instance space. The algorithm uses the manifold structure of instances to estimate the labeling confidence of unlabeled instances while using the maximum margin to derive the classification model, and adopts alternating optimization to solve the quadratic programming problems for the model parameters and the labeling confidence iteratively. On 12 UCI datasets and 4 datasets generated from the MNIST database of handwritten digits, the proposed algorithm outperforms the comparison algorithms in 60.5% of the configurations in semi-supervised transductive learning, and in 42.6% of the configurations in semi-supervised inductive learning.
DQN-based Multi-agent Motion Planning Method with Deep Reinforcement Learning
SHI Dianxi, PENG Yingxuan, YANG Huanhuan, OUYANG Qianying, ZHANG Yuhui, HAO Feng
Computer Science. 2024, 51 (2): 268-277.  doi:10.11896/jsjkx.230500113
Abstract PDF(3970KB)
DQN, as a classical value-based deep reinforcement learning method, has been widely used in multi-agent motion planning. However, DQN faces a series of challenges: it can overestimate Q values, its Q value computation is relatively complicated, its neural networks have no memory of history, and its ε-greedy exploration strategy is inefficient. To address these problems, a DQN-based multi-agent deep reinforcement learning motion planning method is proposed, which helps agents learn an efficient and stable motion planning policy so as to reach their target points without collision. Firstly, on top of DQN, a Dueling-based optimization of Q value computation is proposed, which splits the Q value into a state value and an advantage function value and selects the optimal action based on the parameters of the Q network currently being updated, making the Q value computation simpler and more accurate. Secondly, a GRU-based memory mechanism is proposed: a GRU module is introduced so that the network can capture temporal information and process the agents' historical observations. Thirdly, an effective noise-based exploration mechanism is proposed, which replaces DQN's exploration mode with parameterized noise, improves the agents' exploration efficiency, and lets the multi-agent system reach an exploration-exploitation equilibrium. The method is tested on the PyBullet simulation platform in six different scenarios, and the results show that it enables multi-agent teams to collaborate efficiently, reach their respective target points without collision, and train more stably.
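The Dueling decomposition described above rebuilds Q(s,a) from a state value V(s) and an advantage A(s,a), with a mean-advantage baseline that keeps the decomposition identifiable; a minimal head is sketched below, leaving out the GRU memory and noisy exploration layers.

```python
# Dueling Q-value head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feat_dim, n_actions):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)              # V(s)
        self.advantage = nn.Linear(feat_dim, n_actions)  # A(s, a)

    def forward(self, features):
        v = self.value(features)
        a = self.advantage(features)
        return v + a - a.mean(dim=-1, keepdim=True)      # Q(s, a)

q = DuelingHead(feat_dim=128, n_actions=5)(torch.randn(4, 128))
print(q.shape, q.argmax(dim=-1))                         # greedy actions per state
```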
Computer Network
CARINA: An Efficient Application Layer Protocol Conversion Approach for IoT Interoperability
WANG Lina, LAI Kunhao, YANG Kang
Computer Science. 2024, 51 (2): 278-285.  doi:10.11896/jsjkx.230100108
Abstract PDF(2122KB)
To solve the interoperability problems caused by the large number of IoT devices and protocols with differing architectures and application scenarios, this paper proposes an efficient and scalable application layer protocol conversion approach. The approach uses protocol packet parsing and key method mapping for HTTP and three other widely used protocols. Considering the significant differences in the underlying architectures, message formats, communication modes, and application scenarios of the four protocols, it unifies information storage across protocols by parsing the original protocol packets, extracting the key information, and storing it as key-value pairs. By constructing a key method mapping table, the methods of different protocols are mapped to one another, realizing interconnection between protocols. Experimental results show that the approach performs well in message conversion among the four protocols: under the same test conditions it converts significantly faster than Ponte, a comparable method, with nearly a 10-fold difference in some cases, and it supports twice as many conversion types as Ponte. Overall, the proposed method outperforms state-of-the-art methods in scalability and efficiency.
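The key-value intermediate representation plus method-mapping table can be pictured with a toy sketch; the field names and mapping entries below are illustrative assumptions, not CARINA's actual tables.

```python
# Toy key-value message representation and method-mapping table for protocol
# conversion (illustrative; not the CARINA implementation).
METHOD_MAP = {
    ("MQTT", "PUBLISH"): {"HTTP": "POST", "CoAP": "POST"},
    ("HTTP", "GET"):     {"CoAP": "GET",  "MQTT": "SUBSCRIBE"},
}

def convert(message, target_protocol):
    """message: key-value pairs parsed from the source protocol packet."""
    src = (message["protocol"], message["method"])
    out = dict(message)                       # carry payload fields across
    out["protocol"] = target_protocol
    out["method"] = METHOD_MAP[src][target_protocol]
    return out

msg = {"protocol": "MQTT", "method": "PUBLISH",
       "topic": "home/temp", "payload": "21.5"}
print(convert(msg, "HTTP"))
```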
Online Task Offloading Decision Algorithm for High-speed Vehicles
DING Shuang, CAO Muyu, HE Xin
Computer Science. 2024, 51 (2): 286-292.  doi:10.11896/jsjkx.221200069
Abstract PDF(2469KB)
When and where to offload tasks are the main questions in task offloading decisions for vehicular edge computing. High-speed driving causes frequent changes of offloading access devices, and the offloading communication between the vehicle and an access device may break at any time, so an offloading decision must be made immediately once the vehicle obtains an offloading opportunity. Existing offloading decision research focuses on maximizing the offloading gain without fully considering the impact of decision timeliness on the offloading strategy; as a result, the proposed methods have high time and space complexity and cannot be used for online task offloading decisions of high-speed vehicles. To solve these problems, this paper first considers both decision timeliness and offloading gain, establishes a task offloading decision model for high-speed vehicles, and transforms it into a variant of the secretary problem. Then, an online task offloading decision algorithm, OODA, based on weighted bipartite graph matching is proposed to help a vehicle make real-time offloading decisions as it passes multiple heterogeneous edge servers sequentially, maximizing the overall offloading gain. Finally, the competitive ratio of OODA is analyzed theoretically, and extensive simulation results show that OODA is feasible and effective.
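The underlying stopping problem is the classic secretary rule: observe roughly the first n/e candidates, then commit to the first one that beats them. The sketch below shows only this stopping idea; OODA itself works on a weighted bipartite matching variant with a theoretically analyzed competitive ratio.

```python
# Classic 1/e secretary stopping rule for irrevocable online decisions.
import math
import random

def secretary_pick(gains):
    """Observe the first ~n/e candidates, then take the first one that beats them."""
    n = len(gains)
    cutoff = max(1, int(n / math.e))
    best_seen = max(gains[:cutoff])
    for g in gains[cutoff:]:
        if g > best_seen:
            return g                # commit immediately: offload to this server
    return gains[-1]                # forced to take the last option

random.seed(0)
offload_gains = [random.random() for _ in range(20)]  # gain per passing server
print(secretary_pick(offload_gains), max(offload_gains))
```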
Study on Deep Reinforcement Learning for Energy-aware Virtual Machine Scheduling
WANG Yangmin, HU Chengyu, YAN Xuesong, ZENG Deze
Computer Science. 2024, 51 (2): 293-299.  doi:10.11896/jsjkx.230100031
Abstract PDF(2327KB)
With the rapid development of computer technology, cloud computing has become one of the best ways to meet users' storage and computing demands, and dynamic virtual machine scheduling under the NUMA architecture has become a hot topic in academia and industry. However, in current research, heuristic algorithms struggle to schedule virtual machines in real time, and most of the literature does not consider the energy consumption caused by virtual machine scheduling under the NUMA architecture. This paper proposes a deep reinforcement learning-based service migration framework for virtual machines in large-scale mobile cloud centers and constructs an energy consumption model for the NUMA architecture. A hierarchical adaptive sampling soft actor-critic algorithm (HASAC) is proposed and compared with classical deep reinforcement learning methods in cloud computing scenarios. Experimental results show that the improved algorithm handles more user requests in different scenarios while consuming less energy. In addition, experiments on the algorithm's individual strategies demonstrate their effectiveness.
Study on Cache-oriented Dynamic Collaborative Task Migration Technology
ZHAO Xiaoyan, ZHAO Bin, ZHANG Junna, YUAN Peiyan
Computer Science. 2024, 51 (2): 300-310.  doi:10.11896/jsjkx.230600128
Abstract PDF(5188KB)
Task migration technology has been propelled by the continuous emergence of compute-intensive and delay-sensitive services in edge networks. However, task migration is hindered by technical bottlenecks such as complex, time-varying application scenarios and the difficulty of problem modeling. In particular, when user movement is considered, designing a reasonable task migration strategy that ensures the stability and continuity of user services remains a persistent challenge. Therefore, a mobility-aware service pre-caching model and a task pre-migration strategy are proposed, transforming the task migration problem into an optimization problem that combines an optimal clustering strategy with edge service pre-caching. First, the current state of a task is predicted from the user's movement trajectory, and, to decide when and where to migrate, a pre-migration model for two scenarios, mobility-triggered and load-triggered migration, is proposed by introducing the concepts of dynamic cooperation clusters and a migration prediction radius. Then, for the tasks that need to be migrated, the maximum tolerable delay constraint is used to derive limits on the cooperative cluster radius and the number of target servers in a cluster. Subsequently, a user-centric distributed dynamic multi-server cooperative clustering algorithm (DDMC) and a cache-based double deep Q-network algorithm (C-DDQN) for services are proposed to solve the optimal clustering and service caching problems. Finally, a low-complexity alternating minimization algorithm for updating service cache locations is designed using the causality of service caches, yielding the optimal set of migration target servers and realizing server collaboration and network load balancing during task migration. Experimental results demonstrate the robustness and system performance of the proposed migration selection algorithm: compared with other algorithms, the total cost is reduced by at least 12.06% and the total latency by at least 31.92%.
EAGLE: A Network Telemetry Mechanism Based on Telemetry Data Graph in Kernel and User Mode
XIAO Zhaobin, CUI Yunhe, CHEN Yi, SHEN Guowei, GUO Chun, QIAN Qing
Computer Science. 2024, 51 (2): 311-321.  doi:10.11896/jsjkx.221100196
Abstract PDF(4055KB)
Network telemetry is a new network measurement technology characterized by strong real-time performance, high accuracy, and low overhead. Existing network telemetry techniques cannot collect multi-granularity network data, cannot effectively store large amounts of raw network data, cannot quickly extract and generate network telemetry information, and do not exploit kernel-mode and user-mode features in their designs. To solve these problems, this paper proposes EAGLE, a multi-granularity, scalable, network-wide telemetry mechanism that integrates kernel mode and user mode and is based on telemetry data graphs and synchronization control blocks. On the data plane, EAGLE designs a flexible, controllable network telemetry packet structure capable of collecting multi-granularity data, used to obtain the data required by upper-layer applications. In addition, to quickly store, query, aggregate, and compute statistics over network state data, and to rapidly extract and generate the telemetry data required by telemetry packets, EAGLE proposes a telemetry information generation method based on telemetry data graphs and synchronization control blocks. On this basis, to maximize the processing efficiency of telemetry packets, EAGLE proposes a telemetry information embedding architecture that combines kernel-mode and user-mode features. Finally, this paper implements and tests EAGLE on Open vSwitch. The results show that EAGLE can collect multi-granularity data and quickly extract and generate telemetry data while adding only a little processing time and resource usage.
Information Security
High-dimensional Data Publication Under Local Differential Privacy
CAI Mengnan, SHEN Guohua, HUANG Zhiqiu, YANG Yang
Computer Science. 2024, 51 (2): 322-332.  doi:10.11896/jsjkx.230600142
Abstract PDF(3203KB) ( 1525 )   
References | Related Articles | Metrics
With the increasing availability of high-dimensional data collected from numerous users,preserving user privacy while utilizing high-dimensional data poses significant challenges.This paper focuses on high-dimensional data publication under local differential privacy.State-of-the-art solutions first construct probabilistic graphical models to generate a set of noisy low-dimensional marginal distributions of the input data,and then use them to approximate the joint distribution of the input dataset for generating synthetic datasets.However,existing methods have limitations in computing marginal distributions for the large number of attribute pairs needed to construct probabilistic graphical models,as well as in calculating joint distributions for attribute subsets within these models.To address these limitations,this paper proposes PrivHDP(high-dimensional data publication under local differential privacy).First,it uses sampling-based randomized response instead of the traditional privacy budget splitting strategy to perturb user data,and proposes an adaptive marginal distribution computation method to compute the marginal distributions of pairwise attributes and construct a Markov network.Second,it employs a novel method to measure the correlation between pairwise attributes in place of mutual information,and introduces a threshold technique based on high-pass filtering to reduce the search space during the construction of the probabilistic graphical model.It then combines triangulation operations and the junction tree algorithm to obtain a set of attribute subsets.Finally,based on joint distribution decomposition and redundancy elimination,the proposed method computes the joint distribution over attribute subsets.Experimental results on four real datasets demonstrate that PrivHDP outperforms similar algorithms in terms of k-way query accuracy and SVM classification accuracy,validating its effectiveness and efficiency.
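The local perturbation such a pipeline rests on can be illustrated with standard k-ary (generalized) randomized response, which reports the true category with probability p = e^eps / (e^eps + k - 1) and any other category uniformly otherwise; the sketch below shows the perturbation plus the matching unbiased frequency estimator. This is the generic GRR primitive for illustration only; PrivHDP's sampling and estimation details differ.

import math
import random
from collections import Counter

def grr_perturb(value, domain, eps):
    """Report the true value with probability p, any other value otherwise."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Unbiased frequency estimates from perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

domain = ["A", "B", "C", "D"]
truth = ["A"] * 700 + ["B"] * 200 + ["C"] * 80 + ["D"] * 20
reports = [grr_perturb(v, domain, eps=1.0) for v in truth]
print(grr_estimate(reports, domain, eps=1.0))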
Research and Implementation of MQTT Security Mechanism Based on Domestic Cryptographic Algorithms
LIU Zechao, LIANG Tao, SUN Ruochen, HAO Zhiqiang, LI Jun
Computer Science. 2024, 51 (2): 333-342.  doi:10.11896/jsjkx.221100157
Abstract PDF(2813KB) ( 1521 )   
References | Related Articles | Metrics
Aiming at the problems that the existing MQTT protocol lacks effective identity authentication and transmits data in plaintext,an MQTT security protection scheme is designed based on the domestic cryptographic algorithms SM2,SM3 and SM4.Two-way identity authentication between the client and the MQTT Broker is realized by the SM2 algorithm.The SM4 algorithm is used to encrypt the username,password and message contents of subjects in the MQTT protocol.The SM3 algorithm is used to ensure the integrity of data transmitted by the MQTT protocol.Applying self-controllable domestic cryptographic technology to the MQTT protocol can effectively improve its security protection capability.Security analysis and experimental results show that the proposed scheme not only solves the security problems of the MQTT protocol,but also meets practical application requirements.
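A minimal sketch of the data-protection step described above: the MQTT payload is encrypted and bound to its topic with an integrity digest before publishing. Since no SM library is assumed here, sm4_encrypt() and sm3_hash() below are stand-ins built on SHA-256 so the sketch runs; a real deployment would substitute actual SM4 and SM3 implementations, and the commented publish call only indicates where the sealed payload would be sent.

import hashlib
import json

def sm4_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Placeholder only: a SHA-256-derived keystream stands in for SM4
    # so this sketch runs end to end; swap in a real SM4 implementation.
    stream, counter = b"", 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

def sm3_hash(data: bytes) -> bytes:
    # Placeholder only: SHA-256 stands in for SM3 here.
    return hashlib.sha256(data).digest()

def seal_payload(sm4_key: bytes, topic: str, message: bytes) -> bytes:
    ciphertext = sm4_encrypt(sm4_key, message)
    digest = sm3_hash(topic.encode() + ciphertext)   # bind digest to topic
    return json.dumps({"ct": ciphertext.hex(), "tag": digest.hex()}).encode()

sm4_key = b"\x01" * 16   # in the scheme, established after SM2 mutual auth
payload = seal_payload(sm4_key, "factory/sensor1", b'{"temp": 21.5}')
print(payload)
# client.publish("factory/sensor1", payload)   # e.g. with paho-mqtt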
Screen-shooting Resilient DCT Domain Watermarking Method Based on Deep Learning
HUANG Changxi, ZHAO Chengxin, JIANG Xiaoteng, LING Hefei, LIU Hui
Computer Science. 2024, 51 (2): 343-351.  doi:10.11896/jsjkx.221200121
Abstract PDF(4121KB) ( 1566 )   
References | Related Articles | Metrics
Digital watermarking technology plays an important role in multimedia protection,and the various demands of practical applications promote its development.Recently,the robustness of deep learning-based watermarking models has been greatly improved,but embedding is mostly carried out in the spatial domain,which causes obvious distortion of the original images.In addition,existing methods do not work well under screen-shooting attacks.To solve these problems,this paper proposes a deep learning-based DCT-domain watermarking method that is robust to screen-shooting attacks.The model consists of a DCT layer,an encoder,a decoder and a screen-shooting simulation layer.The DCT layer converts the Y component of an image into the DCT domain,and the encoder embeds secret messages by modifying the DCT coefficients through end-to-end training.Embedding in the frequency domain spreads the watermark information over the whole image,reducing visible distortion.Furthermore,a noise layer is proposed to simulate moiré and light reflection effects,which are common distortions in screen-shooting attacks.The training process is split into two stages.In the first stage,the encoder and decoder are trained end-to-end.In the second stage,the screen-shooting simulation layer and traditional distortion attacks are used to augment the watermarked images,which are then used to further optimize the decoder.Extensive experimental results show that the proposed model has high transparency and robustness,and is superior to other methods in robustness to screen shooting.
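For intuition, the sketch below performs the classical version of what the encoder learns: embedding one bit per 8x8 block of the Y channel by pushing a mid-frequency DCT coefficient toward a positive or negative value, with extraction by reading the coefficient's sign. Block size, coefficient position and strength are illustrative choices, not the paper's learned embedding.

import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(y_channel, bits, strength=8.0, coef=(3, 2)):
    out = y_channel.astype(np.float64).copy()
    h, w = out.shape
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return out
            block = dctn(out[r:r+8, c:c+8], norm="ortho")
            # Push the chosen mid-frequency coefficient toward +/- strength.
            block[coef] = strength if bits[idx] else -strength
            out[r:r+8, c:c+8] = idctn(block, norm="ortho")
            idx += 1
    return out

def extract_bits(y_channel, n_bits, coef=(3, 2)):
    bits = []
    h, w = y_channel.shape
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if len(bits) >= n_bits:
                return bits
            block = dctn(y_channel[r:r+8, c:c+8].astype(np.float64), norm="ortho")
            bits.append(1 if block[coef] > 0 else 0)
    return bits

y = np.random.randint(0, 256, (64, 64)).astype(np.float64)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
watermarked = embed_bits(y, msg)
print(extract_bits(watermarked, len(msg)) == msg)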
Memory Security Vulnerability Detection Combining Fuzzy Testing and Dynamic Analysis
MA Yingzi, CHEN Zhe, YIN Jiale, MAO Ruiqi
Computer Science. 2024, 51 (2): 352-358.  doi:10.11896/jsjkx.221200136
Abstract PDF(1480KB) ( 1514 )   
References | Related Articles | Metrics
C language is widely used in the development of system software and embedded software due to its speed and precise control of memory through pointers,and is one of the most popular programming languages.The power of pointers makes it possible to operate directly on memory.However,C does not provide memory safety checks,so the use of pointers can lead to memory errors such as memory leaks,buffer overflows and multiple frees,and these errors can sometimes cause fatal damage such as system crashes or internal data corruption.At present,several techniques can detect memory security vulnerabilities in C programs.Among them,dynamic analysis detects memory safety violations at runtime by instrumenting the source code,but it can only find an error when the program executes the path where the error is located,so it relies on the program's input.Fuzzy testing finds software vulnerabilities by feeding inputs to the program and monitoring its execution results,but it can detect neither memory safety errors that do not cause the program to crash nor detailed information such as the location of an error.In addition,due to the complex grammar of the C language,dynamic analysis tools often fail to correctly handle some uncommon constructs when analyzing large and complex projects,resulting in instrumentation failures or instrumented programs that do not compile correctly.To address these problems,this paper proposes a method that detects the memory safety of C programs containing such constructs by combining dynamic analysis with fuzzy testing and improving existing methods.Reliability and performance experiments show that,with support for these C-specific constructs,the memory safety of programs containing them can be detected,and the combination with fuzzy testing provides stronger vulnerability detection capability.
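A minimal sketch of the combination described above: feed mutated inputs to an instrumented build of the target and scan its diagnostics for memory errors that do not crash the process, which pure fuzzing would miss. The binary name "./target_instrumented" and the "MEMORY-ERROR" marker are hypothetical; real dynamic-analysis tools emit their own report formats.

import random
import subprocess

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def run_once(data: bytes):
    proc = subprocess.run(["./target_instrumented"], input=data,
                          capture_output=True, timeout=5)
    crashed = proc.returncode < 0          # killed by a signal
    # Dynamic analysis reports errors even when the program exits normally.
    report = [line for line in proc.stderr.splitlines()
              if b"MEMORY-ERROR" in line]
    return crashed, report

seed = b"hello world"
for i in range(1000):
    data = mutate(seed)
    crashed, report = run_once(data)
    if crashed or report:
        print(f"input {i}: crash={crashed}, findings={report}")
        with open(f"finding_{i}.bin", "wb") as f:
            f.write(data)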
SGPot:A Reinforcement Learning-based Honeypot Framework for Smart Grid
WANG Yuzhen, ZONG Guoxiao, WEI Qiang
Computer Science. 2024, 51 (2): 359-370.  doi:10.11896/jsjkx.221100187
Abstract PDF(4599KB) ( 1524 )   
References | Related Articles | Metrics
With the rapid advancement of Industry 4.0,the supervisory control and data acquisition(SCADA) system interconnected with it is gradually becoming more informationized and intelligent.Various security hazards exist in SCADA systems due to system vulnerabilities and the disparity between attack and defense capabilities.Given the frequency of attacks on power systems in recent years,attack mitigation measures for smart grids are urgently needed.Honeypots,as an efficient deception defense method,can effectively capture attacks against smart grids.To address the insufficient interaction depth,lack of physical industrial process simulation and poor scalability of existing smart grid honeypots,this paper designs and implements a reinforcement learning-based smart grid honeypot framework,SGPot.It simulates the control side of a smart substation based on system invariants extracted from real devices in the power industry.By simulating power business processes,SGPot enhances the deception of the honeypot and induces attackers to interact with it in depth.To evaluate the performance of the framework,this paper builds a small smart substation experimental validation environment.SGPot and the existing GridPot and SHaPe honeypots are deployed simultaneously in the public network environment,and 30 days of interaction data are collected.Experimental results show that SGPot collects 20% more request data than GridPot and 75% more than SHaPe.SGPot induces attackers to interact with the honeypot in greater depth than GridPot and SHaPe,and obtains more sessions with interaction length greater than 6.
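The reinforcement-learning loop can be pictured with the toy tabular Q-learning sketch below, where the honeypot picks a response strategy per attacker state and is rewarded for keeping the session alive. States, actions and the simulated attacker are invented for illustration; SGPot's actual state space and rewards are tied to the substation protocols it simulates.

import random
from collections import defaultdict

STATES = ["probe", "read_request", "write_request", "session_active"]
ACTIONS = ["plausible_reply", "delay_reply", "error_reply"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)  # (state, action) -> value

def choose(state):
    if random.random() < EPS:           # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def simulated_attacker(state, action):
    """Stand-in environment: plausible replies keep attackers engaged."""
    keep_going = {"plausible_reply": 0.8, "delay_reply": 0.5, "error_reply": 0.2}
    if random.random() < keep_going[action]:
        return random.choice(STATES), 1.0   # attacker sends another request
    return None, 0.0                        # attacker disconnects

for episode in range(5000):
    state = "probe"
    while state is not None:
        action = choose(state)
        next_state, reward = simulated_attacker(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS) if next_state else 0.0
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

for s in STATES:
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))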
Secure Multiparty Computation of Set Intersection and Union
XIE Qiong, WANG Weiqiong, XU Haojie
Computer Science. 2024, 51 (2): 371-377.  doi:10.11896/jsjkx.221000235
Abstract PDF(1970KB) ( 1547 )   
References | Related Articles | Metrics
Secure multiparty computation of sets is one of the most important problems in confidential scientific computing,with significant applications in electronic elections,threshold signatures and confidential auctions.This paper studies secure set operations for multiple parties.Corresponding encoding methods are proposed for different set operations to transform sets into vectors,and these vectors are then grouped in pairs and encoded by Gödel coding.Combined with the homomorphic ElGamal threshold encryption algorithm,several secure computing protocols for set intersection and union are designed in the semi-honest model.These protocols resist collusion attacks by arbitrary subsets of parties,and the simulation paradigm is used to prove that they are secure in the semi-honest model.The protocols' efficiency is verified by experiments:when the set cardinality meets certain conditions,the proposed protocols are computationally more efficient than existing schemes.
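The homomorphic ingredient can be illustrated with the toy sketch below: each party encodes its set over a public universe as a 0/1 indicator vector, encrypts each entry with additively homomorphic (exponential) ElGamal, and the componentwise product of all parties' ciphertexts decrypts to a per-element count, which equals the number of parties exactly on intersection elements. Parameters are toy-sized, and the paper's Gödel coding and threshold decryption are omitted; a single key stands in for the threshold scheme for brevity.

import random

P = 2_147_483_647   # Mersenne prime 2^31 - 1; toy modulus, not secure
G = 2

def keygen():
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

def enc(pk, m):
    """Exponential ElGamal: encrypt g^m so plaintexts add under ciphertext
    multiplication."""
    r = random.randrange(2, P - 1)
    return pow(G, r, P), (pow(G, m, P) * pow(pk, r, P)) % P

def mul(c1, c2):    # homomorphic addition of plaintexts
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P

def dec_small(sk, c, bound):
    gm = (c[1] * pow(c[0], P - 1 - sk, P)) % P
    for m in range(bound + 1):   # brute-force the small discrete log
        if pow(G, m, P) == gm:
            return m
    raise ValueError("plaintext out of range")

universe = ["alice", "bob", "carol", "dave"]
parties = [{"alice", "bob", "carol"}, {"bob", "carol"}, {"carol", "dave", "bob"}]

sk, pk = keygen()
totals = None
for s in parties:
    cts = [enc(pk, 1 if u in s else 0) for u in universe]   # indicator vector
    totals = cts if totals is None else [mul(a, b) for a, b in zip(totals, cts)]

counts = [dec_small(sk, c, len(parties)) for c in totals]
print([u for u, n in zip(universe, counts) if n == len(parties)])  # intersection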
A Meet-in-the-middle Attack Method of Deoxys-BC
LI Zheng, LI Manman, CHEN Shaozhen
Computer Science. 2024, 51 (2): 378-386.  doi:10.11896/jsjkx.230900112
Abstract PDF(5218KB) ( 1544 )   
References | Related Articles | Metrics
Deoxys-BC,which adopts the SPN structure and the TWEAKEY framework,is a lightweight tweakable block cipher published at ASIACRYPT 2014.By studying the internal characteristics and key schedule of Deoxys-BC,a 6-round meet-in-the-middle distinguisher against Deoxys-BC-256 and a 7-round meet-in-the-middle distinguisher against Deoxys-BC-384 are constructed using controlled tweak differentials,differential enumeration and tweakey differential superimposing elimination techniques.Using these distinguishers,meet-in-the-middle attacks against 9-round Deoxys-BC-256 and 11-round Deoxys-BC-384 are improved.The attacks reduce the number of guessed bytes and thus lower the complexity.Compared with existing meet-in-the-middle attack results on Deoxys-BC,the time complexity and storage complexity are significantly reduced.
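As a generic illustration of the meet-in-the-middle principle behind these attacks (shown on a toy double-encryption cipher, not Deoxys-BC), the sketch below tabulates the plaintext encrypted under all first-stage keys, then meets it from the other side by decrypting the ciphertext under all second-stage keys, recovering candidate key pairs in roughly 2^(n+1) work instead of 2^(2n).

def toy_encrypt(key: int, block: int) -> int:
    # 16-bit toy round: XOR key, rotate left by 3, XOR key again.
    x = (block ^ key) & 0xFFFF
    x = ((x << 3) | (x >> 13)) & 0xFFFF
    return (x ^ key) & 0xFFFF

def toy_decrypt(key: int, block: int) -> int:
    x = (block ^ key) & 0xFFFF
    x = ((x >> 3) | (x << 13)) & 0xFFFF   # rotate right by 3
    return (x ^ key) & 0xFFFF

def mitm(plaintext: int, ciphertext: int, keybits: int = 12):
    # Forward table: intermediate value -> first-stage keys producing it.
    table = {}
    for k1 in range(1 << keybits):
        table.setdefault(toy_encrypt(k1, plaintext), []).append(k1)
    # Meet in the middle with second-stage keys.
    hits = []
    for k2 in range(1 << keybits):
        mid = toy_decrypt(k2, ciphertext)
        for k1 in table.get(mid, []):
            hits.append((k1, k2))
    return hits

k1_true, k2_true, pt = 0x5A3, 0x19C, 0xBEEF
ct = toy_encrypt(k2_true, toy_encrypt(k1_true, pt))
candidates = mitm(pt, ct)
print((k1_true, k2_true) in candidates, len(candidates))

A second known plaintext/ciphertext pair would filter the surviving false-positive key pairs, which is the standard follow-up step in such attacks.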