
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
CODEN JKIEBK


-
Overview of Security Technologies and Strategies for Intelligent Railway 5G
LI Panpan, WU Hao, LIU Jiajia, DUAN Li, LU Yunlong. Overview of Security Technologies and Strategies for Intelligent Railway 5G[J]. Computer Science, 2024, 51(5): 1-11. doi:10.11896/jsjkx.231000104

Abstract
-
Digital technology is reshaping every industry and has become an essential path for industrial development. While digital service technologies such as 5G empower industries like railways, they also introduce security risks, and security is a prerequisite for all services. To promote innovative applications of 5G in intelligent railways, this paper first systematically reviews the security risks and challenges faced by intelligent railway 5G from the perspectives of terminals, the air interface, communication, data, systems, and public-private network integration. For new service scenarios, it analyzes the new technologies, new terminals, and new applications for railways, as well as the new requirements that smart railways impose on 5G security. The new features of 5G security enhancement in cryptographic algorithms, air-interface security, privacy, unified authentication, and roaming are also summarized. On this basis, the key points of smart railway 5G security are identified, including authentication, physical-layer security, terminal security, slice security, and edge-computing security. For 5G private network deployment, recommendations are given on infrastructure, communication security, data security, and an endogenous security defense system.
-
Survey of Research and Application of User Identity Linkage Technology in Cyberspace
WANG Gengrun. Survey of Research and Application of User Identity Linkage Technology in Cyberspace[J]. Computer Science, 2024, 51(5): 12-20. doi:10.11896/jsjkx.230300172

Abstract
-
In recent years, with the development of mobile Internet technology and growing user demand, virtual accounts in cyberspace have proliferated, and users often hold multiple accounts across different applications or even on the same platform. At the same time, because of the virtual nature of cyberspace, the link between a user's virtual identity and real social identity is usually weak, making illegal users in cyberspace difficult to identify. Therefore, driven by the needs of service recommendation and evidence collection, user identity linkage technology, whose main research content is aggregating users' virtual identities in cyberspace and mapping virtual identities to real ones, has developed rapidly. This paper surveys user identity linkage technology in cyberspace. It first defines the scientific problems the technology solves, then introduces the typical characteristics of user identity and the related techniques. Finally, it presents the datasets and evaluation standards and discusses the challenges of user identity linkage.
-
Discipline Competition Evaluation Model Based on Multi-attribute Comprehensive Evaluation
XING Cunyuan, ZHANG Jie, JIN Ying. Discipline Competition Evaluation Model Based on Multi-attribute Comprehensive Evaluation[J]. Computer Science, 2024, 51(5): 21-26. doi:10.11896/jsjkx.230200202

Abstract
-
College student competitions in China are booming; both the number of events held and participation in them show a positive trend. Discipline competitions reflect the discipline development and teaching level of participating colleges, and analyzing and comparing college performance based on competition data can in turn promote colleges' attention to and participation in competitions. In previous research and practice, evaluation of a college's competition level has mostly been limited to summing award scores; such "award-only" models are one-sided because they ignore the development level of colleges and universities. In this paper, activity, performance, and stability indexes are used to describe and evaluate the competition level of participating colleges, and the optimal index weights are determined by the scatter degree method to obtain each college's score. In addition, according to the characteristics of discipline competitions, colleges' differing performance across competition tracks serves as finer-grained features. The participating colleges are divided into four types through t-SNE dimensionality reduction, visualization, and cluster analysis, and specific suggestions for improving competition performance are proposed for each type. Data from the Jiangsu College Student Computer Design Competition since its inception verifies the validity of the model.
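The index-weighting step described above can be sketched as follows; the per-college index values and the weights are invented placeholders (the paper derives its optimal weights with the scatter degree method, which is not reproduced here):

```python
import numpy as np

# Hypothetical index values for four colleges: activity, performance, stability.
indices = np.array([
    [0.8, 0.9, 0.7],   # college A
    [0.6, 0.5, 0.9],   # college B
    [0.9, 0.4, 0.5],   # college C
    [0.3, 0.6, 0.6],   # college D
])
weights = np.array([0.35, 0.40, 0.25])  # placeholder "optimal" weights

# Min-max normalize each index column, then take the weighted sum as the score.
norm = (indices - indices.min(axis=0)) / (indices.max(axis=0) - indices.min(axis=0))
scores = norm @ weights
ranking = np.argsort(-scores)  # best college first
```

With these toy numbers, college A ranks first; the clustering step the abstract mentions would then group colleges by such finer-grained feature vectors.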
-
ST-WaveMLP: Spatio-Temporal Global-aware Network for Traffic Flow Prediction
BAO Kainan, ZHANG Junbo, SONG Li, LI Tianrui. ST-WaveMLP: Spatio-Temporal Global-aware Network for Traffic Flow Prediction[J]. Computer Science, 2024, 51(5): 27-34. doi:10.11896/jsjkx.230100086

Abstract
-
Traffic flow prediction plays an important role in intelligent transportation systems: accurate prediction benefits traffic management and helps people plan their travel. It is very challenging, however, because of the complex spatial and temporal dependencies that must be captured. In recent years, deep learning methods, mainly based on convolutional neural networks, have been successfully applied to traffic forecasting. However, convolutional networks focus on extracting and integrating spatial features, so they struggle to fully model complex spatio-temporal dependencies. Moreover, a single convolutional layer captures only local spatial dependencies; many layers must be stacked to capture global spatial dependencies, which slows the convergence of model training. To address these problems, a global-aware spatio-temporal network model for traffic prediction, ST-WaveMLP, is proposed. It employs a repeatable multi-layer-perceptron-based structure, ST-WaveBlock, to capture complex spatio-temporal dependencies. ST-WaveBlock has strong spatio-temporal representation learning capability: stacking only 2~4 ST-WaveBlocks effectively captures the spatio-temporal dependencies in the data. Experiments on four real traffic flow datasets show that ST-WaveMLP converges better and predicts more accurately, with a relative improvement of up to 9.57% in prediction accuracy and up to 30.6% in convergence speed over the previous best method.
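The core contrast drawn above, an MLP that mixes globally across all regions in one step versus stacked local convolutions, can be illustrated with a minimal numpy sketch; the shapes, the spatial/temporal MLP split, and the residual connection are assumptions for illustration, not the published ST-WaveBlock design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy traffic tensor: (time_steps, regions, channels).
x = rng.normal(size=(12, 16, 8))
T, N, C = x.shape

def mlp(h, w1, w2):
    """Two-layer perceptron with ReLU, applied along the last axis."""
    return np.maximum(h @ w1, 0.0) @ w2

# Spatial mixing: flatten regions into the feature axis so one dense layer
# sees every region at once -- a global receptive field in a single step,
# unlike a single convolution, which only mixes neighboring regions.
w_s1, w_s2 = rng.normal(size=(N * C, 64)), rng.normal(size=(64, N * C))
spatial = mlp(x.reshape(T, N * C), w_s1, w_s2).reshape(T, N, C)

# Temporal mixing: mix across time steps per region/channel.
w_t1, w_t2 = rng.normal(size=(T, 32)), rng.normal(size=(32, T))
temporal = mlp(spatial.transpose(1, 2, 0), w_t1, w_t2).transpose(2, 0, 1)

out = x + temporal  # residual connection keeps such blocks stackable
```

Stacking a few such blocks is what gives the model its global awareness without deep convolutional stacks.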
-
Category-specific and Diverse Shapelets Extraction for Time Series Based on Adversarial Strategies
LUO Ying, WAN Yuan, WANG Liqin. Category-specific and Diverse Shapelets Extraction for Time Series Based on Adversarial Strategies[J]. Computer Science, 2024, 51(5): 35-44. doi:10.11896/jsjkx.230200074

Abstract
-
For time series classification, methods that classify by extracting shapelets from the series have attracted widespread attention for their high classification accuracy and good interpretability. Most existing shapelets-based methods learn shapelets shared by all classes, which can distinguish most classes but not a specific one. Besides, the shapelets obtained by models using adversarial strategies often lack diversity. To solve these problems, this paper proposes a category-specific and diverse shapelets extraction method based on adversarial strategies. The method embeds category information into the time series and adversarially generates a number of distinct category-specific shapelets with a multi-generator module. Diversity of the shapelets is guaranteed by imposing a difference constraint, and finally the features obtained by the shapelet transformation are used to classify the time series. The proposed method is compared experimentally with 5 shapelets-based algorithms and 11 state-of-the-art classification algorithms on 36 time series datasets. Results show that it achieves the best results on 26 and 20 of the 36 datasets respectively, attains the highest average rank in both comparisons, and its average classification accuracy is at least 2.4% and at most 20% higher than that of the other methods. Ablation and visualization analyses demonstrate the effectiveness of the diversity and category-specific components for time series classification.
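The shapelet-transform feature the abstract builds on is standard: each series is described by its minimum distance to each shapelet over all equal-length subsequences. A minimal sketch (toy series and shapelet invented here):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and all equal-length
    subsequences of the series (the standard shapelet-transform feature)."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)

series = np.array([0., 0., 1., 2., 1., 0., 0.])
shapelet = np.array([1., 2., 1.])   # a "peak" shape, matched exactly at i=2
d = shapelet_distance(series, shapelet)
```

A classifier then operates on the vector of such distances to all learned shapelets, which is what makes the approach interpretable.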
-
Time-aware Pre-training Method for Sequence Recommendation
CHEN Wenzhong, CHEN Hongmei, ZHOU Lihua, FANG Yuan. Time-aware Pre-training Method for Sequence Recommendation[J]. Computer Science, 2024, 51(5): 45-53. doi:10.11896/jsjkx.230200049

Abstract
-
Sequence recommendation aims to learn users' dynamic preferences from historical user-item interaction sequences and recommend the next items users may be interested in. Pre-training models have attracted researchers' attention in sequence recommendation because they adapt well to downstream tasks, but existing pre-training methods for sequence recommendation ignore the impact of time on user interaction behavior in real life. To better capture the time semantics of user-item interactions, this paper proposes TPTS-Rec, a time-aware pre-training model for sequence recommendation. First, a time embedding matrix is introduced in the embedding layer to obtain the correlations between items and time in user interaction sequences. Then, a same-time sampling method in the self-attention layer learns the time correlations between items. Finally, in the fine-tuning stage, user interaction sequences are amplified along the time dimension to alleviate data sparsity. Experimental results on real datasets show that the proposed TPTS-Rec model outperforms the baseline models.
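The time-embedding idea can be illustrated with a toy sketch; the hour-of-day bucketing, the dimensions, and the element-wise sum are assumptions for illustration, not the published TPTS-Rec design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sizes: 100 items, 24 discrete time slots, embedding dim 16.
n_items, n_slots, d = 100, 24, 16
item_emb = rng.normal(size=(n_items, d))   # learned item embedding matrix
time_emb = rng.normal(size=(n_slots, d))   # learned time embedding matrix

items = np.array([3, 17, 42])              # one user's interaction sequence
timestamps = np.array([8 * 3600, 9 * 3600 + 120, 20 * 3600])  # seconds of day
slots = (timestamps // 3600) % n_slots     # bucket timestamps into hour slots

# Input to the self-attention layers: item embedding plus time embedding,
# so attention can relate items that co-occur at similar times.
seq_input = item_emb[items] + time_emb[slots]
```

The same-time sampling step described in the abstract would then pair interactions falling into the same slot during pre-training.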
-
Graph Contrast Learning Based Multi-graph Neural Network for Session-based Recommendation Method
LU Min, YUAN Ziting. Graph Contrast Learning Based Multi-graph Neural Network for Session-based Recommendation Method[J]. Computer Science, 2024, 51(5): 54-61. doi:10.11896/jsjkx.230300092

Abstract
-
Session-based recommendation predicts the next interaction item from anonymous user interaction data over a short period of time. Sessions contain few items, and items follow a long-tail distribution. Existing session recommendation models based on graph contrastive learning construct positive and negative samples by randomly cropping or perturbing the items within a session. However, such random dropout strategies further shrink the items available in short sessions, making sessions sparser and biasing session interest learning. To this end, a graph contrastive learning based multi-graph neural network method for session-based recommendation is proposed. Its core idea is as follows: the model extracts item representations on an item local graph and an item global graph, incorporating both local and global higher-order neighborhood information of the items, and on this basis generates item-level session representations. Session-level session representations are then learned on the session-session graph. Finally, the model recursively generates positive and negative sample pairs from the different levels of session interest, and the discriminability of session interests is enhanced by the contrastive learning mechanism. Compared with dropout strategies, the proposed model preserves complete session information and achieves true data augmentation. Extensive experiments on two benchmark datasets show that its recommendation performance is clearly better than that of mainstream baselines.
-
Substation Equipment Malfunction Alarm Algorithm Based on Dual-domain Sparse Transformer
ZHANG Jianliang, LI Yang, ZHU Qingshan, XUE Hongling, MA Junwei, ZHANG Lixia, BI Sheng. Substation Equipment Malfunction Alarm Algorithm Based on Dual-domain Sparse Transformer[J]. Computer Science, 2024, 51(5): 62-69. doi:10.11896/jsjkx.230300001

Abstract
-
Using the time series data generated during the operation of substation electrical equipment, a predictive model of future operating state can be built to detect abnormal data in advance, eliminate hidden faults, and improve stability and operational reliability. The Transformer is an emerging sequence model with advantages on longer sequences, which suits the forward-looking needs of malfunction alarming; however, its high computational complexity and memory footprint make it difficult to apply directly to malfunction alarm tasks. Therefore, a Transformer-based equipment malfunction alarm method built on time series prediction is proposed, which improves the Transformer to model equipment operation data. The model uses a dual-tower encoder to extract sequence features in both the frequency and time domains, and fuses the temporal and spatial feature data across dimensions to extract more detailed information. Second, a sparse attention mechanism replaces standard attention to reduce the computational complexity and memory usage of the Transformer and meet real-time warning requirements. Experiments on the ETT transformer equipment dataset demonstrate the superiority of the proposed model and the necessity of the improved modules. Compared with other methods, it achieves the best MSE and MAE in most prediction tasks, especially long-sequence prediction, and predicts faster.
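The dual-domain (time plus frequency) view of a sensor window can be sketched as follows; the synthetic signal and the hand-picked summary statistics only illustrate the idea of giving one encoder tower a frequency-domain view of the same window, not the paper's actual feature extractors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sensor window standing in for transformer equipment telemetry.
signal = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.normal(size=128)

# One tower would see the raw time-domain window, the other its
# frequency-domain magnitudes; downstream layers fuse the two views.
freq = np.abs(np.fft.rfft(signal))               # frequency-domain view
time_feats = np.array([signal.mean(), signal.std()])
freq_feats = np.array([freq.argmax(), freq.max()])  # dominant frequency bin
fused = np.concatenate([time_feats, freq_feats])
```

Periodic load patterns that are diffuse in the time domain concentrate into a few frequency bins, which is the motivation for the frequency tower.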
-
Diversified Top-k Pattern Mining on Large Graphs
HE Yuang, WANG Xin, SHEN Lingzhen. Diversified Top-k Pattern Mining on Large Graphs[J]. Computer Science, 2024, 51(5): 70-84. doi:10.11896/jsjkx.230300003

Abstract
-
Frequent pattern mining (FPM) is one of the most important problems in graph mining; it is defined as mining all patterns whose frequency exceeds a user-defined threshold in a large graph. In recent years, with the popularity of social networks and similar applications, single-graph-based FPM has received increasing attention. Considerable techniques have been developed, but most suffer from high computational cost, inconvenient result inspection, and difficulty in parallelization. To tackle these issues, this paper proposes an approach to discover diversified top-k patterns from a single large graph. It first designs a diversification function to measure the diversity of patterns, then develops DisTopk, a distributed algorithm with an early-termination property, to efficiently identify diversified top-k patterns from distributively stored graphs. Experiments on real-life and synthetic graphs show that DisTopk mines diversified top-k patterns more efficiently than traditional algorithms.
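A generic greedy diversification heuristic of the kind the abstract alludes to can be sketched as follows; the scoring function, Jaccard similarity, and trade-off parameter are illustrative assumptions, not the paper's diversification function or DisTopk itself:

```python
def diversified_topk(patterns, k, score, sim, lam=0.5):
    """Greedily pick k patterns, trading pattern score against similarity
    to patterns already chosen (a standard diversification heuristic)."""
    chosen = []
    candidates = list(patterns)
    while candidates and len(chosen) < k:
        def gain(p):
            penalty = max((sim(p, q) for q in chosen), default=0.0)
            return lam * score(p) - (1 - lam) * penalty
        best = max(candidates, key=gain)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy patterns as sets of edge labels; frequency as score, Jaccard as similarity.
pats = [frozenset('ab'), frozenset('abc'), frozenset('xy'), frozenset('ac')]
freq = {frozenset('ab'): 10, frozenset('abc'): 9,
        frozenset('xy'): 9, frozenset('ac'): 8}
jac = lambda p, q: len(p & q) / len(p | q)
top2 = diversified_topk(pats, 2, freq.get, jac)
```

Here the second pick is the dissimilar 'xy' pattern rather than the near-duplicate 'abc', even though both have the same frequency, which is the point of diversified top-k over plain top-k.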
-
Cross-modal Information Filtering-based Networks for Visual Question Answering
HE Shiyang, WANG Zhaohui, GONG Shengrong, ZHONG Shan. Cross-modal Information Filtering-based Networks for Visual Question Answering[J]. Computer Science, 2024, 51(5): 85-91. doi:10.11896/jsjkx.230300202

Abstract
-
As a multi-modal task, the bottleneck of visual question answering (VQA) is fusing information across modalities: it requires not only a full understanding of the vision and text in an image but also the ability to align cross-modal representations. The attention mechanism provides an effective path for multi-modal fusion. However, previous methods usually operate directly on the extracted image features, ignoring the noise and incorrect information they contain, and most are limited to shallow interaction between modalities without considering deeper inter-modal semantics. To solve this problem, a cross-modal information filtering network (CIFN) is proposed. First, taking the question features as a supervision signal, an information filtering module is designed to filter the image feature information so that it better fits the question representation. Then the image and question features are fed into a cross-modal interaction layer, where intra-modal and inter-modal relationships are modeled under self-attention and guided attention respectively, yielding finer-grained multi-modal features. Extensive experiments on the VQA2.0 dataset show that the information filtering module effectively improves accuracy, with overall test-std accuracy reaching 71.51%, competitive with the most advanced methods.
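The question-as-supervision filtering idea can be sketched as a soft gating of region features by their affinity to the question vector; the dimensions and the softmax gate are illustrative assumptions, not the actual CIFN module:

```python
import numpy as np

rng = np.random.default_rng(3)

# 36 image region features and one question feature in a shared 64-dim space
# (sizes invented for this sketch).
regions = rng.normal(size=(36, 64))
question = rng.normal(size=(64,))

# Affinity of each region to the question, normalized with a softmax.
logits = regions @ question
weights = np.exp(logits - logits.max())
weights /= weights.sum()

# Down-weight regions the question does not attend to, suppressing noisy
# or irrelevant visual information before cross-modal interaction.
filtered = regions * weights[:, None]
```

The filtered regions, rather than the raw ones, would then enter the self-attention and guided-attention layers.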
-
Multi-stage Intelligent Color Restoration Algorithm for Black-and-White Movies
SONG Jianfeng, ZHANG Wenying, HAN Lu, HU Guozheng, MIAO Qiguang. Multi-stage Intelligent Color Restoration Algorithm for Black-and-White Movies[J]. Computer Science, 2024, 51(5): 92-99. doi:10.11896/jsjkx.231100067

Abstract
-
In colorizing black-and-white movies, existing automatic colorization models often produce monotonous results, while reference-based colorization methods require users to supply reference images, whose demanding quality requirements consume substantial human effort. To address this, this paper proposes a multi-stage intelligent color restoration algorithm for black-and-white movies (MSICRA). First, the movie is split into scene segments with the VGG19 network. Second, each scene segment is cut frame by frame, and the edge intensity and grayscale difference of each frame are used as clarity criteria, selecting frames with clarity between 0.95 and 1 in each scene. Next, the first frame meeting the clarity criteria is selected from the filtered images and colorized with different render factor values; saturation is used to assess the colorization results and choose an appropriate render factor. Finally, the mean squared error between pre- and post-colorization images selects the best colorized frame as the reference for the scene segment. Experimental results show that the algorithm improves PSNR by 1.32% on the black-and-white film Lei Feng and 2.15% on The Eternal Wave, and SSIM by 1.84% and 1.04% respectively. The algorithm is fully automatic and produces realistic colors consistent with human perception.
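A toy version of an edge-intensity clarity score might look like this; the paper's exact combination of edge intensity and grayscale difference is not specified here, so the formula below is a stand-in squashed to the (0, 1) range the abstract's 0.95-1 threshold suggests:

```python
import numpy as np

def clarity(frame):
    """Toy clarity score: mean gradient magnitude of the frame, squashed
    to [0, 1). A stand-in for the paper's edge-intensity criterion."""
    gy, gx = np.gradient(frame.astype(float))
    edge = np.hypot(gx, gy).mean()
    return edge / (1.0 + edge)

# A frame with a sharp step edge versus a flat, featureless frame.
sharp = np.zeros((8, 8))
sharp[:, 4:] = 255.0
blurry = np.full((8, 8), 128.0)
```

Frames scoring below the clarity threshold would be skipped when picking the reference frame for a scene.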
-
Medical Image Segmentation Network Integrating Full-scale Feature Fusion and RNN with Attention
SHAN Xinxin, LI Kai, WEN Ying. Medical Image Segmentation Network Integrating Full-scale Feature Fusion and RNN with Attention[J]. Computer Science, 2024, 51(5): 100-107. doi:10.11896/jsjkx.230400114

Abstract
-
Encoder-decoder networks in deep learning excel at image feature extraction and hierarchical feature fusion and are widely used in medical image segmentation. However, mainstream encoder-decoder segmentation methods still face two problems: 1) the image feature information mined by a single network in the encoding and decoding stages may be insufficient; 2) encoder-decoder networks with simple skip connections cannot fully exploit the contextual information of full-scale features. Therefore, an encoder-decoder network integrating full-scale feature fusion and an attention-equipped RNN is proposed for medical image segmentation. First, a convolutional multi-layer perceptron (MLP) module is introduced into the U-Net encoder to further enlarge its feature receptive field. Second, a full-scale feature fusion module effectively fuses the skip-connection features of each scale with coarse-grained and fine-grained information, reducing the semantic gap between the skip-connection features of each scale and highlighting key image features. Finally, the decoder refines image features level by level through the proposed recurrent attention decoding module (RADU), which combines a recurrent neural network (RNN) with an attention mechanism, strengthening feature extraction while avoiding information redundancy, to obtain the final segmentation results. Compared with mainstream algorithms on the BrainWeb, MRbrainS, HVSMR, and Choledoch datasets, the proposed method improves segmentation precision in pixel accuracy and Dice similarity coefficient. Experimental results show that, with the full-scale feature fusion module and RADU, the method achieves excellent segmentation results and has good noise robustness and anti-interference ability.
-
Partial Near-duplicate Video Detection Algorithm Based on Transformer Low-dimensional Compact Coding
WANG Ping, YU Zhenhuang, LU Lei. Partial Near-duplicate Video Detection Algorithm Based on Transformer Low-dimensional Compact Coding[J]. Computer Science, 2024, 51(5): 108-116. doi:10.11896/jsjkx.230300232

Abstract
-
To address the high storage consumption and low query efficiency of existing partial near-duplicate video detection algorithms, and their feature extraction modules' neglect of the subtle semantic differences between near-duplicate frames, this paper proposes a Transformer-based partial near-duplicate video detection algorithm. First, a Transformer-based feature encoder is proposed that can learn subtle semantic differences among large numbers of near-duplicate frames. A self-attention mechanism over frame region feature maps is applied during frame feature encoding, effectively reducing feature dimensionality while enhancing representational capacity. The encoder is trained with a siamese network, which learns the semantic similarities between near-duplicate frames without negative samples, eliminating heavy and difficult negative-sample annotation and making training simpler and more efficient. Second, a key frame extraction method based on the video self-similarity matrix is proposed. It extracts rich, non-redundant key frames that describe the original video content more comprehensively, improving performance while significantly reducing the storage and computation overhead of redundant key frames. Finally, a graph-network-based temporal alignment algorithm detects and localizes partial near-duplicate video clips using the low-dimensional, compact encoded features of the key frames. The proposed algorithm achieves impressive results on the public partial near-duplicate video detection dataset VCDB and outperforms existing algorithms.
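The self-similarity-based key frame selection can be sketched as a greedy pass over the frame cosine-similarity matrix; the threshold and the greedy keep-if-dissimilar rule are illustrative simplifications of whatever selection rule the paper actually uses:

```python
import numpy as np

def select_keyframes(feats, threshold=0.9):
    """Greedy keyframe pick from a frame self-similarity matrix: keep a
    frame only if its cosine similarity to every kept frame is below the
    threshold, so near-duplicate frames are not stored twice."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T          # the video's self-similarity matrix
    kept = [0]
    for i in range(1, len(feats)):
        if all(sim[i, j] < threshold for j in kept):
            kept.append(i)
    return kept

# Three near-duplicate frames followed by a visually different shot
# (2-dim toy features in place of encoder outputs).
frames = np.array([[1., 0.], [0.99, 0.01], [1., 0.02], [0., 1.]])
keys = select_keyframes(frames)
```

Only the first frame of the near-duplicate run and the new shot survive, which is the non-redundancy property the abstract claims.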
-
Multi Scale Progressive Transformer for Image Dehazing
ZHOU Yu, CHEN Zhihua, SHENG Bin, LIANG Lei. Multi Scale Progressive Transformer for Image Dehazing[J]. Computer Science, 2024, 51(5): 117-124. doi:10.11896/jsjkx.230300049

Abstract
-
To recover image details while preserving global information in the dehazed image, a multi-scale progressive Transformer (MSP-Transformer) is proposed for image dehazing. It effectively extracts haze-related features at different scales and restores the clear image progressively, achieving multi-scale learning and fusion of both features and images. MSP-Transformer comprises an encoding stage, a decoding stage, and a restoration stage. In encoding, a Transformer-block-based encoder decomposes the input image into different scales; the haze-relevant features extracted at each scale fully characterize the information loss of the hazy image. In decoding, considering that different regions of a hazy image suffer different information loss, a feature aggregation module with a multi-scale attention mechanism is designed in the decoder; the multi-scale attention combines channel attention and multi-scale spatial attention to fuse feature information across scales. The restoration stage contains restoration and fusion blocks: the multi-scale feature-fusion restoration block first aggregates haze-relevant features from different scales to strengthen their association, then the aggregated features restore a haze-free image at each scale, and the restored images from all scales are fused by the fusion block to obtain the final dehazed result. Qualitative and quantitative experiments on real and synthetic datasets show that MSP-Transformer has good dehazing performance: compared with 11 state-of-the-art methods, it obtains the best PSNR (39.53 dB) and SSIM (0.9954) on the RESIDE dataset and achieves good visual quality. Ablation experiments further demonstrate the effectiveness of the proposed method.
-
Salient Object Detection Based on Feature Attention Purification
BAI Xuefei, SHEN Wucheng, WANG Wenjian. Salient Object Detection Based on Feature Attention Purification[J]. Computer Science, 2024, 51(5): 125-133. doi:10.11896/jsjkx.230300018

Abstract
-
In recent years, salient object detection has made great progress, in which selecting and effectively integrating multi-scale features plays an important role. Aiming at the information redundancy that existing feature integration methods may introduce, a saliency detection model based on feature attention purification is proposed. First, in the decoder, a global feature attention guidance module (GAGM) processes deep features carrying semantic information with an attention mechanism to obtain global context, which is then sent to each decoder layer for supervision through the global guidance flow. The multi-scale features extracted by the encoder and the global context are then effectively integrated by the multi-scale feature aggregation module (FAM) and further refined in the mesh feature purification module (MFPM) to generate clear and complete salient features. Experimental results on 5 public datasets show that the proposed model outperforms existing salient object detection methods. It is also fast, running at over 30 FPS on 320×320 images.
-
Study on Building Extraction from Remote Sensing Image Based on Multi-scale Attention
HE Xiaohui, ZHOU Tao, LI Panle, CHANG Jing, LI Jiamian. Study on Building Extraction from Remote Sensing Image Based on Multi-scale Attention[J]. Computer Science, 2024, 51(5): 134-142. doi:10.11896/jsjkx.230200134

Abstract
-
Building extraction from remote sensing images based on deep learning offers wide coverage and high computational efficiency and plays an important role in urban construction, disaster prevention, and other applications. Most mainstream methods use multi-scale feature fusion so the network can learn richer semantic information; however, owing to the complexity of multi-scale features and interference from other ground objects, such methods often miss targets and produce noisy output. To this end, this paper proposes MGA-ResNet50 (MGAR), a feature interpretation model that incorporates an attention mechanism. Its core is to use multi-head attention to weight high-level semantic information hierarchically, extracting the feature combination with the best representational effect, and then to use a gating structure to fuse each scale's feature map with the low-level semantic information of the corresponding encoder, compensating for lost local building details. Experimental results on public datasets such as Massachusetts Building and WHU Building show that the proposed algorithm achieves higher F1 and IoU than advanced multi-scale feature fusion methods such as RAPNet, GAMNet, and GSM.
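The F1 and IoU scores used in the comparison are standard pixel-wise measures on binary building masks; they can be computed as follows (the two tiny masks are toy examples):

```python
import numpy as np

def iou_f1(pred, gt):
    """Pixel-wise IoU and F1 for binary masks (standard definitions)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    tp, fp, fn = inter, (pred & ~gt).sum(), (~pred & gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return inter / union, 2 * precision * recall / (precision + recall)

pred = np.array([[1, 1], [0, 0]])  # predicted building pixels
gt = np.array([[1, 0], [0, 0]])    # ground-truth building pixels
iou, f1 = iou_f1(pred, gt)
```

Here one true positive and one false positive give IoU 0.5 and F1 2/3, illustrating why F1 is always at least as large as IoU on the same mask pair.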
-
Salient Object Detection Method Based on Multi-scale Visual Perception Feature Fusion
吴小琴, 周文俊, 左承林, 王一帆, 彭博. 基于多尺度视觉感知特征融合的显著目标检测方法[J]. 计算机科学, 2024, 51(5): 143-150.
WU Xiaoqin, ZHOU Wenjun, ZUO Chenglin, WANG Yifan, PENG Bo. Salient Object Detection Method Based on Multi-scale Visual Perception Feature Fusion[J]. Computer Science, 2024, 51(5): 143-150. - WU Xiaoqin, ZHOU Wenjun, ZUO Chenglin, WANG Yifan, PENG Bo
- Computer Science. 2024, 51 (5): 143-150. doi:10.11896/jsjkx.230100132
-
Abstract
-
Salient object detection has important theoretical research significance and practical application value, and plays an important role in many computer vision applications, such as visual tracking, image segmentation and object recognition. However, the unknown categories and variable scales of salient objects in natural environments remain a major challenge that affects detection results. Therefore, this paper proposes a salient object detection method based on multi-scale visual perception feature fusion. First, based on the characteristics of visual perception, multiple perceptual features are designed and extracted. Second, each perceptual feature adopts a multi-scale adaptive method to obtain a feature saliency map. Finally, the salient feature maps are fused to obtain the final salient object. According to the characteristics of different image perception features, the proposed method adaptively extracts feature salient objects, and can adapt to changing detection objects and complex detection environments. Experimental results show that this method can effectively detect salient objects of unknown categories and different scales under the background interference of natural environments.
-
Hyperspectral Image Recovery Model Based on Bi-smoothing Function Rank Approximation and Group Sparse
姜斌, 叶军, 张历洪, 司伟纳. 基于双平滑函数秩近似和群稀疏的高光谱图像恢复模型[J]. 计算机科学, 2024, 51(5): 151-161.
JIANG Bin, YE Jun, ZHANG Lihong, SI Weina. Hyperspectral Image Recovery Model Based on Bi-smoothing Function Rank Approximation and Group Sparse[J]. Computer Science, 2024, 51(5): 151-161. - JIANG Bin, YE Jun, ZHANG Lihong, SI Weina
- Computer Science. 2024, 51 (5): 151-161. doi:10.11896/jsjkx.230200044
-
Abstract
-
Hyperspectral images (HSI) have good spectral discrimination capabilities and are widely used in various fields. However, HSI is susceptible to mixed noise pollution during imaging, which seriously weakens the accuracy of subsequent tasks, so recovering HSI with high quality is the first problem that needs to be solved. At present, HSI denoising methods that combine a low-rank prior with total variation regularization have achieved good performance. However, on the one hand, these methods ignore the characteristics of high-intensity stripe noise in spatial structure and spectral distribution, so the noise cannot be completely removed; on the other hand, they do not consider the low-rank subspace information of HSI difference images, and thus cannot explore the potential local spatial smooth structure. To solve these problems, an HSI recovery model based on bi-smoothing function rank approximation and group sparsity (BSRAGS) is proposed. Firstly, the bi-smoothing function rank approximation model is used to explore the low-rank structure of the clean HSI and the stripe noise, which can remove high-intensity mixed noise such as structured stripe noise. Secondly, group sparse regularization based on E3DTV is integrated into the bi-smoothing function rank approximation model, which fully exploits the sparse prior information of HSI difference images and further improves spatial recovery and spectral feature retention. Finally, an alternating direction method of multipliers (ADMM) algorithm is designed to solve the proposed BSRAGS model. Simulated and real data experiments show that the proposed model can effectively improve image restoration quality.
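In ADMM schemes of this kind, the sparsity-promoting subproblem typically reduces to an elementwise soft-thresholding (the proximal operator of the L1 norm). The sketch below shows that standard building block; it is the generic operator, not the paper's exact E3DTV-based update.

```python
def soft_threshold(x, tau):
    """Proximal operator of tau*|x| (soft-thresholding): shrink x
    toward zero by tau, clipping small values to exactly zero.
    This is the closed-form solution of the L1-regularized
    subproblem inside each ADMM iteration."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def soft_threshold_vec(v, tau):
    """Apply soft-thresholding elementwise to a sparse-coded vector."""
    return [soft_threshold(x, tau) for x in v]
```

Small coefficients are zeroed out, which is exactly how group-sparse priors suppress noise while preserving dominant structure.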
-
3D Object Detection Based on Edge Convolution and Bottleneck Attention Module for Point Cloud
简英杰, 杨文霞, 方玺, 韩欢. 基于边卷积与瓶颈注意力的点云三维目标检测[J]. 计算机科学, 2024, 51(5): 162-171.
JIAN Yingjie, YANG Wenxia, FANG Xi, HAN Huan. 3D Object Detection Based on Edge Convolution and Bottleneck Attention Module for Point Cloud[J]. Computer Science, 2024, 51(5): 162-171. - JIAN Yingjie, YANG Wenxia, FANG Xi, HAN Huan
- Computer Science. 2024, 51 (5): 162-171. doi:10.11896/jsjkx.230300113
-
Abstract
-
Due to the high sparsity of point cloud data, current point-cloud-based 3D object detection methods are inadequate at learning local features, and invalid information contained in point cloud data can interfere with object detection. To address these problems, a 3D object detection model based on edge convolution (EdgeConv) and the bottleneck attention module (BAM) is proposed. First, by creating a k-nearest-neighbor graph for each point in the feature space, multiple layers of edge convolutions are constructed to learn multi-scale local features of the point cloud. Second, a bottleneck attention module is designed for 3D point cloud data; each BAM consists of a channel attention module and a spatial attention module that enhance the point cloud information valuable for object detection, strengthening the feature representation of the proposed model. The network uses VoteNet as the baseline, with the multilayer edge convolutions and BAM added sequentially between PointNet++ and the voting module. The proposed model is evaluated against 13 state-of-the-art methods on two benchmark datasets, SUN RGB-D and ScanNetV2. Experimental results demonstrate that on the SUN RGB-D dataset, the proposed model achieves the highest mAP@0.5, and the highest AP@0.25 for six out of ten categories such as bed, chair and desk. On the ScanNetV2 dataset, the model outperforms the other 13 methods in mAP at both IoU 0.25 and IoU 0.5, and achieves the highest AP@0.25 for ten out of eighteen categories such as chair, sofa and picture. Compared with the baseline VoteNet, the mAP@0.25 of the proposed model improves by 6.5% and 12.9% on the two datasets, respectively. Ablation studies verify the contributions of each component.
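The first step above, building a k-nearest-neighbor graph over points in feature space, can be sketched with a brute-force version (real EdgeConv implementations use batched tensor ops; this toy version only shows the graph construction):

```python
def knn_graph(points, k):
    """Build a k-nearest-neighbor graph over points in feature space.

    Returns, for each point index, the indices of its k nearest
    neighbors by squared Euclidean distance (excluding itself).
    EdgeConv would then learn edge features over these (i, j) pairs.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = []
    for i, p in enumerate(points):
        ranked = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: sqdist(p, points[j]))
        neighbors.append(ranked[:k])
    return neighbors
```

Because the graph is rebuilt in feature space at each layer, points that are far apart spatially but similar semantically can become neighbors, which is what lets edge convolutions capture multi-scale local structure.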
-
Multi-label Patent Classification Based on Text and Historical Data
徐雪洁, 王宝会. 基于文本及历史数据的多标签专利分类算法研究[J]. 计算机科学, 2024, 51(5): 172-178.
XU Xuejie, WANG Baohui. Multi-label Patent Classification Based on Text and Historical Data[J]. Computer Science, 2024, 51(5): 172-178. - XU Xuejie, WANG Baohui
- Computer Science. 2024, 51 (5): 172-178. doi:10.11896/jsjkx.230200199
-
Abstract
-
Patent classification, which assigns multiple international patent classification (IPC) codes to a given patent, is a very important task in the field of patent data mining. In recent years, many studies on this task have focused on mining patent text to predict the first- or second-level IPC codes. In real scenarios, a patent often has multiple IPC codes, making this a multi-label classification task. Apart from the text, each patent has a corresponding assignee, and the assignee's historical patent application behavior may show a certain business tendency; a preference representation of this behavior can effectively improve the precision of patent classification. However, previous methods fail to make full use of patent historical data. A classification model is proposed for automatic patent classification. The main processing of this model is as follows: firstly, the patent text representation is initialized with the BERT pre-trained language model, and a Text-CNN model captures local features whose output serves as the final patent text representation; secondly, Bi-LSTM is used to learn the preference representation by aggregating historical patent texts and labels through dual channels; finally, the text and the assignee's sequential preferences are fused for prediction. Experiments on a real dataset and comparisons with different baselines show that the proposed patent classification algorithm based on patent text and historical data achieves a substantial improvement in precision.
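The Text-CNN step above extracts local n-gram features by convolving a filter over consecutive token embeddings and max-pooling the responses. A single-filter toy version (plain lists standing in for tensors; not the paper's implementation):

```python
def conv_max_pool(embeddings, kernel):
    """One Text-CNN filter: dot product of the kernel with each
    window of consecutive token embeddings, followed by max pooling.

    embeddings: list of token vectors (len(sentence) x dim)
    kernel:     list of weight vectors (window x dim)
    Returns the max-pooled activation, i.e. the strongest local
    n-gram response anywhere in the sentence.
    """
    window = len(kernel)
    dim = len(embeddings[0])
    scores = []
    for start in range(len(embeddings) - window + 1):
        s = sum(kernel[w][d] * embeddings[start + w][d]
                for w in range(window) for d in range(dim))
        scores.append(s)
    return max(scores)
```

A real Text-CNN runs many such filters with several window sizes and concatenates the pooled activations into the text representation.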
-
Multi-agent Reinforcement Learning Algorithm Based on AI Planning
辛沅霞, 华道阳, 张犁. 基于智能规划的多智能体强化学习算法[J]. 计算机科学, 2024, 51(5): 179-192.
XIN Yuanxia, HUA Daoyang, ZHANG Li. Multi-agent Reinforcement Learning Algorithm Based on AI Planning[J]. Computer Science, 2024, 51(5): 179-192. - XIN Yuanxia, HUA Daoyang, ZHANG Li
- Computer Science. 2024, 51 (5): 179-192. doi:10.11896/jsjkx.230800099
-
Abstract
-
At present, deep reinforcement learning algorithms have achieved a great deal in various fields. However, in multi-agent tasks, agents often face a non-stationary environment with a larger state-action space and sparse rewards, and low exploration efficiency remains a major challenge. Since AI planning can quickly obtain a solution from the initial state and goal state of a task, and this solution can serve as the initial strategy of each agent and provide effective guidance for its exploration, this paper combines the two and proposes a unified model for multi-agent reinforcement learning and AI planning (UniMP), on the basis of which a solution mechanism for the problem is designed and implemented. By transforming the multi-agent reinforcement learning task into an intelligent decision task and performing heuristic search on it, a set of macroscopic goals is obtained to guide the training process of reinforcement learning, so that agents can explore more efficiently. Finally, experiments are carried out on various maps of the multi-agent real-time strategy game StarCraft II and the RoboMaster AI Challenge Simulator 2D. The results show that the cumulative reward and win rate are significantly improved, which verifies the feasibility of UniMP, the effectiveness of the solution mechanism, and the algorithm's ability to flexibly deal with sudden changes in the reinforcement learning environment.
-
New Graph Reduction Representation and Graph Neural Network Model for Premise Selection
兰咏琪, 何星星, 李莹芳, 李天瑞. 面向前提选择的新型图约简表示与图神经网络模型[J]. 计算机科学, 2024, 51(5): 193-199.
LAN Yongqi, HE Xingxing, LI Yingfang, LI Tianrui. New Graph Reduction Representation and Graph Neural Network Model for Premise Selection[J]. Computer Science, 2024, 51(5): 193-199. - LAN Yongqi, HE Xingxing, LI Yingfang, LI Tianrui
- Computer Science. 2024, 51 (5): 193-199. doi:10.11896/jsjkx.230300193
-
Abstract
-
The search space of an automatic theorem prover usually grows explosively when proving problems, and premise selection provides a new solution to this problem. Aiming at the difficulty that the logical formula graphs and graph neural network models in existing premise selection methods have in capturing the latent information of formula graphs, this paper proposes a simplified logical formula graph representation based on removing repeated quantifiers, together with a term-walk graph neural network with an attention mechanism, which makes full use of the syntactic and semantic information of logical formulas to improve the classification accuracy of premise selection. Firstly, conjecture formulas and premise formulas are transformed into simplified first-order logic formula graphs with repeated quantifiers removed. Secondly, a message-passing graph neural network aggregates and updates nodes and their term-walk feature information, and an attention mechanism assigns weights to nodes on the graph to adjust the node embedding information. Finally, the premise vector and the conjecture vector are concatenated and fed into a binary classifier. Experimental results show that the accuracy of the proposed method on the MPTP dataset and the CNF dataset reaches 88.61% and 84.74%, respectively, surpassing existing premise selection methods.
-
Combining Syntactic Enhancement with Graph Attention Networks for Aspect-based Sentiment Classification
张泽宝, 余翰男, 王勇, 潘海为. 结合句法增强与图注意力网络的方面级情感分类[J]. 计算机科学, 2024, 51(5): 200-207.
ZHANG Zebao, YU Hannan, WANG Yong, PAN Haiwei. Combining Syntactic Enhancement with Graph Attention Networks for Aspect-based Sentiment Classification[J]. Computer Science, 2024, 51(5): 200-207. - ZHANG Zebao, YU Hannan, WANG Yong, PAN Haiwei
- Computer Science. 2024, 51 (5): 200-207. doi:10.11896/jsjkx.230200189
-
Abstract
-
Aspect-level sentiment classification aims to identify the sentiment polarity of a given aspect in a text. In this field, combining graph neural networks with syntactic dependency parsing is one of the current hot research directions: a graph structure is constructed from the dependency relations and fed into a graph neural network to obtain the sentiment polarity. If the syntax parser makes a parsing error, the impact on such graph-based neural network models is severe. To strengthen the parsing results of the syntactic dependency tree generated by the parser, a syntactically enhanced graph attention network is proposed. By fusing the parsing results of multiple parsers, the parsing accuracy of syntactic dependencies is improved and a more accurate dependency syntax graph is obtained. A densely connected mechanism is used in the graph attention network to capture richer features, which suits the enhanced syntactic graphs, and an aspect attention mechanism is introduced to capture aspect semantic features. Experimental results verify the effectiveness of the syntactic enhancement method: classification accuracy improves on three benchmark datasets, showing strong performance in aspect-level sentiment analysis.
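The core graph-attention operation, aggregating a node's neighbors with softmax-normalized attention weights, can be sketched for a single node as follows. This is a generic single-head GAT-style aggregation with a dot-product scorer standing in for the learned attention; it is not the paper's densely connected network.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gat_aggregate(center, neighbor_feats, score):
    """Aggregate neighbor features with softmax-normalized attention.

    score(center, nb) returns an unnormalized attention logit; a
    learned scorer would replace the dot product used in the test.
    """
    logits = [score(center, nb) for nb in neighbor_feats]
    m = max(logits)  # stabilized softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(center)
    return [sum(a * nb[d] for a, nb in zip(alphas, neighbor_feats))
            for d in range(dim)]
```

Neighbors along high-confidence dependency edges receive larger weights, which is why fusing multiple parsers into a cleaner graph directly improves what the attention layer aggregates.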
-
Multilingual Event Detection Based on Cross-level and Multi-view Features Fusion
张志远, 张维彦, 宋雨秋, 阮彤. 基于跨层级多视角特征的多语言事件探测[J]. 计算机科学, 2024, 51(5): 208-215.
ZHANG Zhiyuan, ZHANG Weiyan, SONG Yuqiu, RUAN Tong. Multilingual Event Detection Based on Cross-level and Multi-view Features Fusion[J]. Computer Science, 2024, 51(5): 208-215. - ZHANG Zhiyuan, ZHANG Weiyan, SONG Yuqiu, RUAN Tong
- Computer Science. 2024, 51 (5): 208-215. doi:10.11896/jsjkx.230200131
-
Abstract
-
The goal of the multilingual event detection task is to organize a collection of news documents in multiple languages into different key events, where each event can include news documents in different languages. This task facilitates various downstream applications, such as multilingual knowledge graph construction, event reasoning and information retrieval. At present, multilingual event detection mainly follows two approaches: translate first and then detect events, or detect events in each language first and then align across languages. The former depends on translation quality, while the latter requires a separate training model for each language. To this end, this paper proposes a multilingual event detection method based on cross-level multi-view feature fusion, which performs the task end to end. The method uses multi-view document features from different levels to obtain high reliability, and improves the generalization performance of event detection for low-resource languages. Experiments on a news dataset mixing nine languages show that the proposed method improves the BCubed F1 value by 4.63%.
-
Government Event Dispatch Approach Based on Deep Multi-view Network
李子琛, 易修文, 陈顺, 张钧波, 李天瑞. 基于深度多视图网络的政务事件分拨方法[J]. 计算机科学, 2024, 51(5): 216-222.
LI Zichen, YI Xiuwen, CHEN Shun, ZHANG Junbo, LI Tianrui. Government Event Dispatch Approach Based on Deep Multi-view Network[J]. Computer Science, 2024, 51(5): 216-222. - LI Zichen, YI Xiuwen, CHEN Shun, ZHANG Junbo, LI Tianrui
- Computer Science. 2024, 51 (5): 216-222. doi:10.11896/jsjkx.230300034
-
Abstract
-
The 12345 government affairs service convenience hotline is a public service platform set up by local governments to handle hotline events. In recent years, with the advancement of government digitization, the significance of the 12345 hotline as a communication link between citizens and government has greatly increased, and the requirements for event-handling efficiency keep rising. The traditional event dispatch method mainly relies on the manual operation of dispatchers, which is slow, inaccurate and labor-intensive, so a government event dispatch method based on a deep multi-view network is proposed. Firstly, a weighted graph convolutional neural network is trained by self-supervised learning to extract behavioral representations of event categories and dispatched departments from historical assignment records. Then, a BERT model fine-tuned on a government-domain corpus extracts semantic representations of the event description and event title. Next, a residual network based on the attention mechanism fuses the multiple views of the event to obtain a fused event representation. Finally, the fused representation is fed into a classifier to obtain the dispatch result. Experiments on the dataset of the Nantong 12345 hotline show that the proposed method is superior to other baseline methods on various metrics and can improve the efficiency of event dispatch.
-
Adaptive Context Matching Network for Few-shot Knowledge Graph Completion
杨旭华, 张炼, 叶蕾. 基于自适应上下文匹配网络的小样本知识图谱补全[J]. 计算机科学, 2024, 51(5): 223-231.
YANG Xuhua, ZHANG Lian, YE Lei. Adaptive Context Matching Network for Few-shot Knowledge Graph Completion[J]. Computer Science, 2024, 51(5): 223-231. - YANG Xuhua, ZHANG Lian, YE Lei
- Computer Science. 2024, 51 (5): 223-231. doi:10.11896/jsjkx.230200012
-
Abstract
-
A knowledge graph has to face complex real-world information during construction and cannot model all knowledge, so it needs to be completed. Many relations in real knowledge graphs often have only a few entity pairs for training, which makes few-shot knowledge graph completion a very significant problem. At present, embedding-based methods generally aggregate entity context information through attention mechanisms or other methods and complete the knowledge graph by learning relation embeddings. These methods only consider the matching degree at the relation level; although they can predict unknown relations, the results are often inaccurate. Therefore, an adaptive context matching network (ACMN) is proposed for few-shot knowledge graph completion. Firstly, a common-neighbor-aware encoder is proposed to aggregate the reference context, that is, one-hop neighbor entities, and obtain common-neighbor-aware embeddings. Secondly, a task-related entity encoder is proposed to mine the similarity between the task entity context and the common context, distinguish the contribution of one-hop neighbors to the current task, and enhance the task entity representation. Then a context-relation encoder is proposed to obtain dynamic relation representations. Finally, the matching degrees of entity contexts and relations are comprehensively combined through weighted summation to complete the completion. ACMN evaluates whether a query triple holds from the two aspects of entity context similarity and relation matching, which effectively improves prediction accuracy in few-shot scenarios. Compared with eight widely used algorithms on two public datasets, ACMN achieves the best completion results under different few-shot sizes.
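The final weighted-summation scoring idea, combining context-level and relation-level similarity, can be sketched with cosine similarities. The balance weight and the use of cosine are hypothetical illustrations of the general scheme, not ACMN's learned scoring function.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den

def match_score(ctx_query, ctx_ref, rel_query, rel_ref, weight):
    """Score a query triple as a weighted sum of entity-context
    similarity and relation-level matching; `weight` is a
    hypothetical balancing coefficient in [0, 1]."""
    return (weight * cosine(ctx_query, ctx_ref)
            + (1 - weight) * cosine(rel_query, rel_ref))
```

A triple scores highly only when both its entity context and its relation representation match the few-shot references, which is the two-aspect evaluation described above.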
-
Linear Inertial ADMM for Nonseparable Nonconvex and Nonsmooth Problems
刘洋, 刘康, 王永全. 求解不可分离非凸非光滑问题的线性惯性ADMM算法[J]. 计算机科学, 2024, 51(5): 232-241.
LIU Yang, LIU Kang, WANG Yongquan. Linear Inertial ADMM for Nonseparable Nonconvex and Nonsmooth Problems[J]. Computer Science, 2024, 51(5): 232-241. - LIU Yang, LIU Kang, WANG Yongquan
- Computer Science. 2024, 51 (5): 232-241. doi:10.11896/jsjkx.240200027
-
Abstract
-
In this paper, a linearized inertial alternating direction method of multipliers (LIADMM) is proposed for nonconvex nonsmooth minimization problems whose objective function contains a coupling function H(x,y). To facilitate the solution of the subproblems, the coupling function H(x,y) is linearized in the objective function and an inertial effect is introduced into the x-subproblem. The global convergence of the algorithm is established under appropriate assumptions, and its strong convergence is proved by introducing auxiliary functions satisfying the Kurdyka-Łojasiewicz inequality. Two numerical experiments show that the algorithm with the inertial effect has better convergence performance than the algorithm without it.
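In schematic form, a linearized inertial ADMM for $\min_{x,y} f(x)+g(y)+H(x,y)$ subject to $Ax+By=b$ runs updates of the following shape (a generic sketch with penalty $\beta$, proximal parameter $\tau$ and inertial parameter $\alpha_k$; the paper's exact stepsize and parameter conditions are given in the text):

```latex
\begin{aligned}
\hat{x}^{k} &= x^{k} + \alpha_k \left(x^{k} - x^{k-1}\right)
  && \text{(inertial extrapolation)}\\
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x)
  + \left\langle \nabla_x H(\hat{x}^{k}, y^{k}),\, x \right\rangle
  + \tfrac{\tau}{2}\left\| x - \hat{x}^{k} \right\|^2
  - \left\langle \lambda^{k}, Ax \right\rangle
  + \tfrac{\beta}{2}\left\| Ax + By^{k} - b \right\|^2
  && \text{(linearized }H\text{)}\\
y^{k+1} &= \operatorname*{arg\,min}_{y}\; g(y) + H(x^{k+1}, y)
  - \left\langle \lambda^{k}, By \right\rangle
  + \tfrac{\beta}{2}\left\| Ax^{k+1} + By - b \right\|^2 \\
\lambda^{k+1} &= \lambda^{k} - \beta \left( Ax^{k+1} + By^{k+1} - b \right)
\end{aligned}
```

Linearizing $H$ at the extrapolated point $\hat{x}^{k}$ turns the $x$-subproblem into a proximal step on $f$ alone, which is what makes the subproblems tractable.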
-
Distributed Adaptive Multi-agent Rendezvous Control Based on Average Consensus Protocol
谢光强, 钟必为, 李杨. 基于平均一致协议的分布式自适应多智能体聚集控制[J]. 计算机科学, 2024, 51(5): 242-249.
XIE Guangqiang, ZHONG Biwei, LI Yang. Distributed Adaptive Multi-agent Rendezvous Control Based on Average Consensus Protocol[J]. Computer Science, 2024, 51(5): 242-249. - XIE Guangqiang, ZHONG Biwei, LI Yang
- Computer Science. 2024, 51 (5): 242-249. doi:10.11896/jsjkx.230300159
-
Abstract
-
Distributed rendezvous control is an important issue in multi-agent collaborative control. Due to the limited mobility and perception capabilities of agents, traditional distributed rendezvous algorithms struggle to ensure connectivity, so the agents tend to split into multiple clusters. In addition, decentralized large-scale rendezvous control poses a huge challenge for obtaining global rendezvous points. For the connectivity preservation problem, a multi-agent rendezvous protocol with connectivity constraints (MARP-CC) is proposed based on the average consensus protocol and a constraint set. Then, for the unpredictability of the rendezvous point, location synthesis (LSS) and location redirection (LRS) control strategies are proposed; each agent adaptively selects the optimal control strategy at each iteration based on the current connectivity situation. Finally, combining these two control strategies, a distributed adaptive multi-agent rendezvous algorithm with connectivity constraints (DAMAR-CC) is proposed. Convergence and connectivity analyses of the algorithm are given, and extensive simulations show that DAMAR-CC makes agents stably rendezvous at the geometric center of the initial topology.
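The underlying average consensus protocol can be sketched directly: each agent repeatedly moves its state toward its neighbors' states, and on a connected symmetric graph all states converge to the average of the initial states (the "geometric center" behavior noted above). This is the textbook discrete-time protocol, without the paper's connectivity constraint set.

```python
def consensus_step(states, adjacency, epsilon):
    """One step of a discrete-time average consensus protocol:

        x_i(t+1) = x_i(t) + eps * sum_{j in N(i)} (x_j(t) - x_i(t))

    With a connected symmetric graph and small enough eps, all
    states converge to the average of the initial states, and the
    sum of states is preserved at every step."""
    n = len(states)
    new = []
    for i in range(n):
        delta = sum(states[j] - states[i]
                    for j in range(n) if adjacency[i][j])
        new.append(states[i] + epsilon * delta)
    return new
```

Running this per coordinate moves the agents toward a common point; MARP-CC additionally constrains each step so no communication link is ever stretched beyond range.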
-
Very Short Texts Hierarchical Classification Combining Semantic Interpretation and DeBERTa
陈昊飏, 张雷. 融合语义解释和DeBERTa的极短文本层次分类[J]. 计算机科学, 2024, 51(5): 250-257.
CHEN Haoyang, ZHANG Lei. Very Short Texts Hierarchical Classification Combining Semantic Interpretation and DeBERTa[J]. Computer Science, 2024, 51(5): 250-257. - CHEN Haoyang, ZHANG Lei
- Computer Science. 2024, 51 (5): 250-257. doi:10.11896/jsjkx.231100134
-
Abstract
-
Text hierarchical classification has important applications in scenarios such as social comment topic classification and search term classification. The data in these scenarios often exhibits short-text characteristics, reflected in the sparsity and sensitivity of information, which poses great challenges for model feature representation and classification performance; the complexity and associativity of the hierarchical label space further exacerbate the difficulty. In view of this, a method fusing semantic interpretation and the DeBERTa model is proposed. Its core ideas are as follows: introduce the semantic interpretation of individual words or phrases in specific contexts to supplement and refine the content information acquired by the model, and combine the disentangled attention and enhanced mask decoder of the DeBERTa model to better capture positional information and improve feature extraction. The method first performs grammatical disambiguation and lexical annotation on the training text, then constructs the GlossDeBERTa model to perform semantic disambiguation with high accuracy and obtain the semantically interpreted sequence. The SimCSE framework then vectorizes the interpreted sequence to better characterize its sentence information. Finally, the training text passes through the DeBERTa network to obtain the feature vector representation of the original text, which is summed with the corresponding feature vector of the interpreted sequence and passed to a multi-class classifier. The experiments select the very-short-text portion of the short-text hierarchical classification dataset TREC and expand the data, resulting in a dataset with an average length of 12 words. Multiple sets of comparison experiments show that the proposed DeBERTa model with fused semantic interpretation performs best: its accuracy, F1-micro and F1-macro values on the validation and test sets clearly surpass those of the other models, so it copes well with the hierarchical classification of very short texts.
-
Prompt Learning-based Generative Approach Towards Medical Dialogue Understanding
柳俊, 阮彤, 张欢欢. 基于提示学习的生成式医疗对话理解方法[J]. 计算机科学, 2024, 51(5): 258-266.
LIU Jun, RUAN Tong, ZHANG Huanhuan. Prompt Learning-based Generative Approach Towards Medical Dialogue Understanding[J]. Computer Science, 2024, 51(5): 258-266. - LIU Jun, RUAN Tong, ZHANG Huanhuan
- Computer Science. 2024, 51 (5): 258-266. doi:10.11896/jsjkx.230300007
-
Abstract
-
The goal of the dialogue understanding module in task-oriented dialogue systems is to convert the user's natural language input into a structured form. However, in diagnosis-oriented medical dialogue systems, existing approaches face the following problems: 1) the granularity of the information cannot fully satisfy the needs of diagnosis, such as providing the severity of a symptom; 2) it is difficult to simultaneously handle the diverse representations of slot values in the medical domain, such as "symptom", which may contain non-contiguous and nested entities, and "negation", which may take categorical values. This paper proposes a generative medical dialogue understanding method based on prompt learning. To address problem 1), it replaces the single-level slot structure in the current dialogue understanding task with a multi-level slot structure to represent finer-grained information, and proposes a generative approach based on dialogue-style prompts, which uses prompt tokens to simulate the dialogue between doctor and patient and obtains multi-level information over multiple rounds of interaction. To address problem 2), it adopts a restricted decoding strategy during inference, so that the model can handle intention detection and the slot filling of extractive and categorical slots in a unified manner, avoiding complex modeling. In addition, to address the lack of labeled data in the medical domain, a two-stage training strategy is proposed to leverage a large-scale unlabeled medical dialogue corpus to improve performance. A dataset containing 4 722 dialogues involving 17 intentions and 74 types of slots is annotated and released for medical dialogue understanding with a multi-level slot structure. Experiments show that the proposed approach can effectively parse various complex entities in medical dialogues, with 2.18% higher performance than existing generative methods, and the two-stage training improves model performance by up to 5.23% in low-data scenarios.
-
Specific Emitter Identification Based on Hybrid Feature Selection
顾楚梅, 曹建军, 王保卫, 徐雨芯. 基于混合式特征选择的辐射源个体识别[J]. 计算机科学, 2024, 51(5): 267-276.
GU Chumei, CAO Jianjun, WANG Baowei, XU Yuxin. Specific Emitter Identification Based on Hybrid Feature Selection[J]. Computer Science, 2024, 51(5): 267-276. - GU Chumei, CAO Jianjun, WANG Baowei, XU Yuxin
- Computer Science. 2024, 51 (5): 267-276. doi:10.11896/jsjkx.230300216
-
Abstract
-
To improve the accuracy and computational efficiency of specific emitter identification, a specific emitter identification method based on hybrid feature selection is proposed. Wrapper feature selection methods achieve high classification accuracy but have high computational complexity and low efficiency on high-dimensional data, while embedded feature selection methods have low computational complexity but rely on specific classifiers. To address these problems, the characteristics of wrapper and embedded feature selection are combined. Firstly, three embedded methods (Random Forest, XGBoost and LightGBM) are used to initially select features from the signal data, yielding a Random Forest subset, an XGBoost subset and a LightGBM subset. Secondly, wrapper methods perform a second dimensionality reduction on the initially selected subsets, using sequential backward selection and an ant colony optimization algorithm as search strategies and LightGBM as the classification algorithm. The proposed hybrid feature selection approach thus yields six feature selection models in total, and the optimal hybrid feature selection model is determined by comparing the classification accuracy and the number of features in the optimal subset obtained by each model.
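The sequential backward selection used as one of the wrapper search strategies can be sketched with a pluggable scoring function. The greedy skeleton below is the generic algorithm; in the paper the scorer would be LightGBM classification accuracy, which is stubbed out here as an arbitrary callable.

```python
def sequential_backward_selection(features, score_fn, min_features=1):
    """Wrapper-style sequential backward selection.

    Starts from the full feature set and greedily removes the
    feature whose removal most improves score_fn, stopping when no
    removal improves the score or min_features is reached.
    score_fn(subset) is a stand-in for classifier accuracy
    (e.g. LightGBM cross-validation)."""
    selected = list(features)
    best_score = score_fn(selected)
    while len(selected) > min_features:
        candidates = [(score_fn([f for f in selected if f != g]), g)
                      for g in selected]
        cand_score, worst = max(candidates)
        if cand_score <= best_score:
            break  # no removal helps; stop
        selected.remove(worst)
        best_score = cand_score
    return selected, best_score
```

Because every candidate removal triggers a classifier evaluation, this wrapper step is expensive on raw high-dimensional data, which is exactly why the embedded methods prune the feature set first.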
-
Indoor Location Algorithm in Dynamic Environment Based on Transfer Learning
王佳昊, 付一夫, 冯海男, 任昱衡. 基于迁移学习的动态环境室内定位方法研究[J]. 计算机科学, 2024, 51(5): 277-283.
WANG Jiahao, FU Yifu, FENG Hainan, REN Yuheng. Indoor Location Algorithm in Dynamic Environment Based on Transfer Learning[J]. Computer Science, 2024, 51(5): 277-283. - WANG Jiahao, FU Yifu, FENG Hainan, REN Yuheng
- Computer Science. 2024, 51 (5): 277-283. doi:10.11896/jsjkx.230300137
-
Abstract
-
With the development of the smart home, Wi-Fi-signal-based localization technology has been widely studied. In practice, the training data and test data collected for an indoor positioning algorithm usually do not come from the same ideal conditions: changes in environmental conditions and signal drift cause the training data and test data to follow different probability distributions. Existing positioning algorithms cannot maintain stable accuracy under such distribution shift, which dramatically reduces positioning accuracy and can make indoor localization infeasible. Considering these difficulties, the domain adaptation techniques of transfer learning have proven in past research to be a promising solution to the problem of inconsistent probability distributions. In this paper, a feature-transfer-based indoor localization algorithm, TL-GLMA, is proposed by combining domain adaptation learning with machine learning algorithms. TL-GLMA maps the original data of the two domains into a high-dimensional space through feature transfer, so as to minimize the distribution difference between the two domains while retaining local geometric properties. In addition, because the mapped data are independent and identically distributed, TL-GLMA can use them to train a classifier and achieve better localization results. Experimental results show that TL-GLMA can effectively reduce the interference caused by environmental changes and improve localization accuracy.
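The "distribution difference" that feature-transfer methods minimize can be illustrated with its simplest proxy: the distance between the feature means of the two domains (a linear-kernel maximum mean discrepancy). This is a generic measure for intuition, not TL-GLMA's actual objective.

```python
import math

def mean_embedding_distance(source, target):
    """Euclidean distance between the per-dimension feature means of
    two domains, the simplest proxy for the distribution
    discrepancy that domain adaptation aims to minimize."""
    dim = len(source[0])
    mu_s = [sum(x[d] for x in source) / len(source) for d in range(dim)]
    mu_t = [sum(x[d] for x in target) / len(target) for d in range(dim)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mu_s, mu_t)))
```

A feature mapping that drives this distance toward zero makes Wi-Fi fingerprints collected before and after an environmental change look alike to the downstream location classifier.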
-
Segmental Routing in Band Telemetry Method for Endogenous Secure Switches
顾周超, 程光, 赵玉宇. 面向内生安全交换机的段路由带内遥测方法[J]. 计算机科学, 2024, 51(5): 284-292.
GU Zhouchao, CHENG Guang, ZHAO Yuyu. Segmental Routing in Band Telemetry Method for Endogenous Secure Switches[J]. Computer Science, 2024, 51(5): 284-292.
- Computer Science. 2024, 51 (5): 284-292. doi:10.11896/jsjkx.230400030
Abstract
Network technology has evolved rapidly in recent years, and the infrastructure and network services it provides have become increasingly complex, posing serious challenges to traditional network management and monitoring tools. Researchers at home and abroad have proposed segment routing (SR) and in-band network telemetry (INT) to enable more real-time, fine-grained network measurement. In practice, however, INT still faces many challenges in rapidly growing networks, namely flexible, dynamic, and efficient deployment. In particular, traditional INT lacks a suitable carrier, and its packet overhead grows linearly with the telemetry path length, creating a performance bottleneck for telemetry monitoring. To address the high bit overhead and deployment difficulty of traditional in-band telemetry systems, this paper proposes an SRv6-based in-band network telemetry approach (SRv6-based INT) that reduces the overhead of both INT and SR and combines the two seamlessly into a lightweight, adaptive telemetry method. The INT metadata is designed so that its length equals that of the Segment field in SRv6, and at each hop the corresponding SID is overwritten with that hop's INT metadata according to the flow table issued by the monitoring server. This design combines the advantages of both techniques and keeps the overhead within a reasonable range, outperforming traditional in-band network telemetry methods.
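A minimal sketch of the SID-overwriting idea, with hypothetical SID names, a forward-ordered segment list, and none of the real SRv6 encoding (real segment lists are stored in reverse on the wire):

```python
def record_telemetry(segment_list, segments_left, metadata):
    """Overwrite the SID slot the packet has already consumed with this
    hop's telemetry metadata, so the header does not grow per hop as in
    classic INT."""
    consumed = len(segment_list) - 1 - segments_left  # index of the used SID
    segment_list[consumed] = metadata
    return segment_list

path = ["SID-A", "SID-B", "SID-C"]   # hypothetical segment identifiers
hdr = list(path)
for hop, left in enumerate([2, 1, 0]):
    hdr = record_telemetry(hdr, left, f"meta-hop{hop}")
print(hdr)   # ['meta-hop0', 'meta-hop1', 'meta-hop2']
```

The header stays three slots long at every hop, which is the constant-overhead property the abstract describes.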
-
COURIER:Edge Computing Task Scheduling and Offloading Method Based on Non-preemptive Priorities Queuing and Prioritized Experience Replay DRL
杨秀文, 崔允贺, 钱清, 郭春, 申国伟. COURIER:基于非抢占式优先排队和优先经验重放DRL的边缘计算任务调度与卸载方法[J]. 计算机科学, 2024, 51(5): 293-305.
YANG Xiuwen, CUI Yunhe, QIAN Qing, GUO Chun, SHEN Guowei. COURIER:Edge Computing Task Scheduling and Offloading Method Based on Non-preemptive Priorities Queuing and Prioritized Experience Replay DRL[J]. Computer Science, 2024, 51(5): 293-305.
- Computer Science. 2024, 51 (5): 293-305. doi:10.11896/jsjkx.230200121
Abstract
Edge computing (EC) deploys large amounts of computing and storage resources at the network edge to meet the latency and power-consumption requirements of tasks, and computation offloading is one of its key technologies. When estimating task queuing delay, existing offloading methods usually adopt M/M/1/∞/∞/FCFS or M/M/n/∞/∞/FCFS models. Because these models ignore the priority of highly delay-sensitive tasks, delay-insensitive tasks can occupy computing resources indefinitely, increasing delay cost. Meanwhile, most existing methods replay experience by random sampling, which cannot distinguish good experience from bad, leading to low experience utilization and slow neural network convergence. Finally, computation offloading methods based on deterministic-policy deep reinforcement learning (DRL) suffer from weak environment exploration, low robustness, and low experience utilization, which reduce the accuracy of the offloading solution. To address these problems, this paper studies task scheduling and offloading decisions in a scenario with multi-task mobile devices and multiple edge servers, aiming to minimize system delay and energy consumption, and proposes COURIER (computation offloading qUeuing and pRioritIzed experience replay DRL). COURIER first designs a non-preemptive priority queuing model (M/M/n/∞/∞/NPR) to optimize task queuing delay. It then proposes a maximum-entropy DRL algorithm with prioritized experience replay: building on the soft actor-critic (SAC) algorithm, the offloading decision mechanism adds information entropy to the objective function so that the agent adopts a stochastic policy, and optimizes the experience sampling method to accelerate network convergence. Simulation results show that COURIER effectively reduces system delay and energy consumption.
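The prioritized replay component can be sketched as a proportional-sampling buffer. This is a simplified stand-in for COURIER's mechanism, with invented class and parameter names; production implementations add a sum-tree and importance-sampling weights:

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized replay: transition i is sampled
    with probability p_i^alpha / sum_j p_j^alpha, p_i = |TD error| + eps."""
    def __init__(self, alpha=0.6):
        self.buffer, self.priorities, self.alpha = [], [], alpha

    def push(self, transition, td_error):
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        return random.choices(self.buffer, weights=self.priorities, k=k)

random.seed(1)
buf = PrioritizedReplay()
buf.push("low-error", td_error=0.01)
buf.push("high-error", td_error=5.0)
batch = buf.sample(1000)
print(batch.count("high-error") > batch.count("low-error"))  # True
```

High-TD-error transitions dominate the batch, which is exactly how prioritized replay speeds up convergence relative to uniform sampling.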
-
Radar Active Jamming Recognition Based on Multiscale Fully Convolutional Neural Network and GRU
洪梯境, 刘登峰, 刘以安. 基于多尺度FCN和GRU的雷达有源干扰识别[J]. 计算机科学, 2024, 51(5): 306-312.
HONG Tijing, LIU Dengfeng, LIU Yian. Radar Active Jamming Recognition Based on Multiscale Fully Convolutional Neural Network and GRU[J]. Computer Science, 2024, 51(5): 306-312.
- Computer Science. 2024, 51 (5): 306-312. doi:10.11896/jsjkx.230300062
Abstract
Radar plays a vital role in modern electronic warfare. As the contest between electronic countermeasures and counter-countermeasures intensifies, two problems urgently need to be addressed in complex electromagnetic environments: manual feature extraction for active radar jamming is difficult, and recognition rates are low at low jamming-to-noise ratios (JNR). This paper proposes a jamming recognition algorithm that parallelizes a multiscale fully convolutional neural network (MFCN) with a gated recurrent unit (GRU). The model is an end-to-end deep neural network requiring no complex pre-processing: the raw time-domain sequence of the jamming signal is fed in directly to classify jamming signals at different JNRs. Simulation results show that recognition accuracy increases as the JNR increases; the overall recognition rate is 99.4% over the full JNR range of -10 dB to 10 dB, and accuracy approaches 100% above -6 dB. Compared with a plain multiscale fully convolutional network, a GRU alone, and other classical models, the proposed network achieves higher recognition accuracy and a lower usable JNR limit.
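As a rough illustration of the multiscale idea only, with moving-average filters standing in for learned convolution kernels (this is not the paper's MFCN, and the kernel sizes are invented):

```python
import numpy as np

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Filter the raw time-domain sequence at several scales in parallel
    and stack the branch outputs, mimicking parallel multiscale branches."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # crude smoothing kernel
        feats.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(feats)                            # (branches, signal_len)

x = np.sin(np.linspace(0, 8 * np.pi, 256))            # toy jamming waveform
f = multiscale_features(x)
print(f.shape)   # (3, 256)
```

Each branch sees the same raw sequence at a different temporal scale; a real MFCN learns these kernels and a GRU then models the sequence of fused features.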
-
Convolutional Neural Network Model Compression Method Based on Cloud Edge Collaborative Subclass Distillation
孙婧, 王晓霞. 基于云边协同子类蒸馏的卷积神经网络模型压缩方法[J]. 计算机科学, 2024, 51(5): 313-320.
SUN Jing, WANG Xiaoxia. Convolutional Neural Network Model Compression Method Based on Cloud Edge Collaborative Subclass Distillation[J]. Computer Science, 2024, 51(5): 313-320.
- Computer Science. 2024, 51 (5): 313-320. doi:10.11896/jsjkx.240100038
Abstract
In the current training and distribution process of convolutional neural network models, the cloud has abundant computing resources and datasets but struggles to meet the fragmented demands of edge scenarios, while the edge can train and infer directly but cannot easily reuse cloud-trained models under unified rules. To address the low training and inference effectiveness of model-compression methods under limited edge resources, this paper first proposes a cloud-edge collaborative model distribution and training framework. The framework combines the advantages of cloud and edge to retrain models, meeting the edge's requirements on recognition targets, hardware resources, and accuracy. Second, building on this framework's training approach, two new subclass knowledge distillation methods, based on logits (SLKD) and on channels (SCKD), are proposed to improve knowledge distillation. The cloud server first provides a multi-target recognition model; through subclass knowledge distillation, the model is then retrained at the edge into a lightweight model deployable in resource-limited scenarios. Finally, the joint training framework and the two subclass distillation algorithms are validated on the CIFAR-10 dataset. Experimental results show that, at a compression ratio of 50%, inference accuracy improves by 10% to 11% over full-classification models. Compared with retraining the model, models trained by knowledge distillation are also substantially more accurate, and the higher the compression ratio, the more significant the accuracy improvement.
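A toy sketch of logits-based subclass distillation: the student is penalized for diverging from the teacher only over the subclass of labels the edge must recognize. The KL-on-subclass form, temperature, and all names here are assumptions, not necessarily the paper's exact SLKD formulation:

```python
import numpy as np

def softmax(z, t=1.0):
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

def subclass_distill_loss(teacher_logits, student_logits, subclass, t=2.0):
    """KL divergence between teacher and student distributions restricted
    to the subclass of labels the edge model must recognize."""
    p = softmax(teacher_logits[subclass], t)
    q = softmax(student_logits[subclass], t)
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([4.0, 1.0, 0.5, 3.0, -1.0])   # 5-class cloud model
sub = [0, 3]                                     # edge cares about 2 classes
perfect = subclass_distill_loss(teacher, teacher.copy(), sub)
off = subclass_distill_loss(teacher, np.zeros(5), sub)
print(perfect < off)   # True: a matching student incurs zero loss
```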
-
Adaptive Image Steganography Against JPEG Compression
张静涵, 陈文. 抗JPEG压缩的自适应图像隐写算法[J]. 计算机科学, 2024, 51(5): 321-330.
ZHANG Jinghan, CHEN Wen. Adaptive Image Steganography Against JPEG Compression[J]. Computer Science, 2024, 51(5): 321-330.
- Computer Science. 2024, 51 (5): 321-330. doi:10.11896/jsjkx.231000036
Abstract
Existing network communication systems often apply JPEG compression to images to reduce transmission overhead, but traditional image steganography cannot withstand it: after lossy compression of a stego image, the secret data is easily destroyed and cannot be extracted correctly. Designing secure and robust image steganography therefore has important practical value. This paper proposes an adaptive image steganography algorithm that is robust to JPEG compression. First, it analyzes the information loss that JPEG compression inflicts on the cover image and identifies stable texture features that can serve as robust embedding domains for the secret information. On this basis, it presents an adaptive quantization embedding method that adjusts to block texture features, embedding the secret information into the texture-mean features of image blocks, which resist compression strongly. Finally, an error-feedback mechanism adjusts the generated stego image until the secret-information extraction error rate reaches the expected value, yielding a stego image robust to JPEG compression. Comparative experiments on BossBase 1.01 show that the proposed method remains robust to compression, resistant to detection, and of high image quality after JPEG compression. Compared with traditional least significant bit (LSB) steganography, Catalan-transform-based image steganography, fixed neural network steganography (FNNS), and robust steganography using autoencoder latent space (RoSteALS), the proposed algorithm reduces the average extraction error rate by 49.79%, 49.73%, 37.38%, and 38.85% respectively, while maintaining better visual quality than the latter three. The algorithm is also robust against StegExpose steganalysis.
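The block-texture-mean embedding can be illustrated with a simple quantization-index sketch. The step size, function names, and additive noise standing in for compression are all invented; the paper's adaptive step selection and error feedback are omitted:

```python
import numpy as np

STEP = 8.0   # quantization step; larger steps survive stronger compression

def embed_bit(block, bit):
    """Shift the block so its mean lands in a quantization cell whose
    parity encodes the bit; a block mean is far more JPEG-stable than
    any individual pixel."""
    target_cell = 2 * round(block.mean() / (2 * STEP)) + bit
    return block + (target_cell * STEP - block.mean())

def extract_bit(block):
    return int(round(block.mean() / STEP)) % 2

rng = np.random.default_rng(7)
block = rng.uniform(60, 90, size=(8, 8))
stego = embed_bit(block, 1)
noisy = stego + rng.normal(0, 1.0, size=(8, 8))   # mild compression-like noise
print(extract_bit(noisy))   # 1
```

The per-pixel noise barely moves the 64-pixel mean, so the bit survives; this is the robustness argument behind embedding in block-mean features.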
-
Verifiable Decryption Scheme Based on MLWE and MSIS
郭春彤, 吴文渊. 基于MLWE和MSIS的可验证解密方案[J]. 计算机科学, 2024, 51(5): 331-345.
GUO Chuntong, WU Wenyuan. Verifiable Decryption Scheme Based on MLWE and MSIS[J]. Computer Science, 2024, 51(5): 331-345.
- Computer Science. 2024, 51 (5): 331-345. doi:10.11896/jsjkx.230300127
Abstract
Verifiable decryption, a building block of two-party secure computation, applies to real-world privacy-sensitive scenarios such as medical research data sharing and cross-institution cooperative model training, helping to break down data silos while keeping data secure. However, existing zero-knowledge proofs of correct decryption built on lattice-based or other post-quantum encryption schemes are inefficient. This paper therefore proposes a verifiable decryption scheme for Kyber based on the module learning with errors (MLWE) and module short integer solution (MSIS) problems. First, owing to Kyber's encryption and decryption properties, the equivalence relations constructed from the data held by the prover and by the verifier differ. The scheme uses error estimation combined with Kyber's compression function to let the prover provide the verifier with just enough information about its data to eliminate this difference, yielding an equivalence relation usable for verification. This relation is then combined with the framework of the Dilithium signature scheme (the variant without public-key compression) to construct a non-interactive zero-knowledge proof, turning the verifiable decryption problem into proving a linear relation satisfied by short vectors over the ring. Second, the scheme's correctness, security, communication overhead, and computational complexity are analyzed theoretically; its soundness and zero-knowledge properties are reduced to the hardness assumptions of MSIS, and two suggested parameter sets at different security levels are provided. Finally, the scheme's correctness and efficiency are tested with a C implementation. The experimental results match the theoretical analysis, and compared with existing schemes, the proposed scheme has significant advantages in proof size and proof time for a single ciphertext, making it more concise and efficient.
-
Study on Artificial Immune Detector Generation Algorithm Based on Label Influence Propagation
周遵龙, 陈文, 马欣蕾. 基于标签影响力传播的人工免疫检测器生成算法研究[J]. 计算机科学, 2024, 51(5): 346-354.
ZHOU Zunlong, CHEN Wen, MA Xinlei. Study on Artificial Immune Detector Generation Algorithm Based on Label Influence Propagation[J]. Computer Science, 2024, 51(5): 346-354.
- Computer Science. 2024, 51 (5): 346-354. doi:10.11896/jsjkx.231000027
Abstract
Artificial immune systems use training samples to screen and train candidate detectors, generating mature detectors that cover the non-self region and distinguish self from non-self. Detector generation based on the traditional negative selection algorithm (NSA) usually requires a large number of labeled self training samples, but in practical applications labeled samples are scarce, so detectors are insufficiently trained and detection accuracy suffers. To address this, this paper proposes an immune detector training method based on label influence propagation: within each cluster, the few labeled members propagate their label influence to the unlabeled samples of the same cluster, which are then pseudo-labeled. Low-confidence newly labeled samples are subsequently removed by a noise-learning-based pseudo-label assessment. Newly labeled samples that pass the assessment are added to the training sample set, enlarging the labeled sample pool and improving detector training quality. Comparative experiments on seven UCI public datasets of different dimensions and sizes show that the proposed label-influence-propagation-based algorithm effectively improves detector training; especially when training samples are limited or datasets are unbalanced, the detector significantly outperforms traditional methods. Compared with detector generation algorithms such as PSA, co-PSA, and GFNSA, recognition accuracy improves by 10% on average.
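The underlying negative selection step can be sketched as follows. This is an illustrative NSA in Euclidean space with invented radii and sample counts; the paper's label propagation and pseudo-label filtering, which supply the self set, are omitted:

```python
import numpy as np

def generate_detectors(self_samples, n_candidates, self_radius, rng):
    """Classic negative selection: draw random candidate detectors and
    keep only those farther than self_radius from every self sample."""
    dim = self_samples.shape[1]
    candidates = rng.uniform(0, 1, size=(n_candidates, dim))
    dists = np.linalg.norm(
        candidates[:, None, :] - self_samples[None, :, :], axis=2)
    return candidates[dists.min(axis=1) > self_radius]

rng = np.random.default_rng(3)
self_set = rng.uniform(0.4, 0.6, size=(50, 2))   # "normal" region
detectors = generate_detectors(self_set, 500, 0.15, rng)
# Every mature detector lies outside the self region by construction.
print(all(np.linalg.norm(d - s) > 0.15 for d in detectors for s in self_set))
```

More (pseudo-)labeled self samples tighten the self region's coverage, which is why enlarging the labeled set improves detector quality.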
-
Study on Binary Code Similarity Detection Based on Jump-SBERT
严尹彤, 于璐, 王泰彦, 李宇薇, 潘祖烈. 基于Jump-SBERT的二进制代码相似性检测技术研究[J]. 计算机科学, 2024, 51(5): 355-362.
YAN Yintong, YU Lu, WANG Taiyan, LI Yuwei, PAN Zulie. Study on Binary Code Similarity Detection Based on Jump-SBERT[J]. Computer Science, 2024, 51(5): 355-362.
- Computer Science. 2024, 51 (5): 355-362. doi:10.11896/jsjkx.230400011
Abstract
Binary code similarity detection plays an important role in many security fields. Existing methods suffer from high computational cost and low accuracy, incomplete recognition of binary function semantics, and evaluation on a single dataset. To address these problems, this paper proposes a binary code similarity detection technique based on Jump-SBERT, with two main innovations. First, it uses a twin (Siamese) network to build the SBERT structure, reducing the model's computational cost without sacrificing accuracy. Second, it introduces a jump-recognition mechanism that enables Jump-SBERT to learn the graph structure of binary functions, capturing their semantics more comprehensively. Experimental results show that Jump-SBERT reaches 96.3% recognition accuracy in a small function pool (32 functions) and 85.1% in a large pool (10 000 functions), 36.13% higher than state-of-the-art (SOTA) methods, and that it is more stable in large-scale binary code similarity detection. Ablation experiments show that both innovations contribute positively, with the jump-recognition mechanism contributing up to 9.11%.
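Once functions are embedded as vectors, pool-based similarity detection reduces to ranking by cosine similarity. A sketch with random stand-in embeddings (these are not Jump-SBERT outputs; the pool size and names are invented):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_pool(query_vec, pool):
    """Rank a function pool by cosine similarity to the query embedding,
    the standard retrieval step once per-function vectors exist."""
    scores = [(name, cosine(query_vec, v)) for name, v in pool.items()]
    return sorted(scores, key=lambda s: -s[1])

rng = np.random.default_rng(5)
query = rng.normal(size=16)
pool = {f"func_{i}": rng.normal(size=16) for i in range(31)}
pool["func_match"] = query + rng.normal(scale=0.05, size=16)  # near-duplicate
print(rank_pool(query, pool)[0][0])   # func_match
```

The "32-function pool" accuracy in the abstract corresponds to how often the true match tops exactly this kind of ranking.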
-
Robust Anomaly Detection Based on Adversarial Samples and AutoEncoder
李沙沙, 邢红杰. 基于对抗样本和自编码器的鲁棒异常检测[J]. 计算机科学, 2024, 51(5): 363-373.
LI Shasha, XING Hongjie. Robust Anomaly Detection Based on Adversarial Samples and AutoEncoder[J]. Computer Science, 2024, 51(5): 363-373.
- Computer Science. 2024, 51 (5): 363-373. doi:10.11896/jsjkx.230300153
Abstract
Anomaly detection methods based on autoencoders train only on normal samples, so they reconstruct normal samples effectively but cannot reconstruct abnormal ones. However, under adversarial attack such methods often produce wrong detection results. To solve these problems, robust anomaly detection based on adversarial samples and autoencoder (RAD-ASAE) is proposed. RAD-ASAE consists of two parameter-sharing encoders and one decoder. First, normal samples are slightly perturbed to generate adversarial samples, and the model is trained on both simultaneously to improve its adversarial robustness. Second, in sample space the method minimizes the reconstruction error of the adversarial samples as well as the mean squared error between normal samples and the reconstructions of the adversarial samples, while in latent space it minimizes the mean squared error between the latent features of normal and adversarial samples, improving the autoencoder's reconstruction ability. Experimental results on MNIST, Fashion-MNIST and CIFAR-10 show that RAD-ASAE outperforms seven related methods in detection performance.
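The reconstruction-error principle behind autoencoder anomaly detection can be illustrated with a linear autoencoder fitted by SVD, a toy stand-in for RAD-ASAE without its adversarial training or twin encoders:

```python
import numpy as np

def fit_linear_ae(X, k):
    """Linear autoencoder via truncated SVD: encode = project onto the
    top-k principal directions of the normal data, decode = project back."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def recon_error(x, mean, W):
    z = (x - mean) @ W.T          # encode
    xr = z @ W + mean             # decode
    return float(np.sum((x - xr) ** 2))

rng = np.random.default_rng(11)
normal = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.2]])
mean, W = fit_linear_ae(normal, k=1)
ok = recon_error(np.array([3.0, 0.0]), mean, W)    # on the normal manifold
bad = recon_error(np.array([0.0, 3.0]), mean, W)   # off the manifold
print(ok < bad)   # True: off-manifold samples reconstruct poorly
```

Thresholding this error gives the detector; RAD-ASAE's contribution is keeping the error gap reliable even under adversarial perturbation.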
-
Robust and Multilayer Excel Document Watermarking for Source Tracing
韩松源, 王宏霞, 蒋子渝. 面向流动追踪的多层鲁棒Excel文档水印[J]. 计算机科学, 2024, 51(5): 374-381.
HAN Songyuan, WANG Hongxia, JIANG Ziyu. Robust and Multilayer Excel Document Watermarking for Source Tracing[J]. Computer Science, 2024, 51(5): 374-381.
- Computer Science. 2024, 51 (5): 374-381. doi:10.11896/jsjkx.230300192
Abstract
Excel documents are widely used in finance, scientific research, data analysis, and statistical reporting, and play an increasingly important role in education and training, online office work, and many other scenarios, but they also carry security risks such as unauthorized use, infringement, and information leakage. Protecting the digital content of Excel documents calls for more secure and reliable document watermarking algorithms. Based on the Excel document format, this paper proposes a multilayer watermarking algorithm for source tracing with good invisibility and robustness. By embedding multilayer watermark information into cell styles and the RGB color values of cell borders, the algorithm can clarify the document distribution chain in practical application scenarios, trace the source of a leak, and locate the person responsible, thereby reducing the occurrence of information leakage. Experimental comparisons show that the watermark is imperceptible, robust to a wide range of common attacks, and supports up to five layers of watermark embedding. Compared with other document-format-based watermarking algorithms, the proposed algorithm offers better watermark invisibility, stronger robustness, and a wider range of applications.
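Embedding bits into RGB border colors can be sketched with a simple least-significant-bit scheme. This is illustrative only: the paper's cell-style layer, its multilayer design, and its robustness machinery are all omitted:

```python
def embed_bits_rgb(rgb, bits):
    """Hide three watermark bits in the LSB of each channel of a border
    color; a one-unit change per channel is visually imperceptible."""
    return tuple((c & ~1) | b for c, b in zip(rgb, bits))

def extract_bits_rgb(rgb):
    return tuple(c & 1 for c in rgb)

border = (212, 212, 212)            # a light-grey cell border
marked = embed_bits_rgb(border, (1, 0, 1))
print(marked, extract_bits_rgb(marked))   # (213, 212, 213) (1, 0, 1)
```

Assigning each recipient a distinct bit pattern across many borders is what lets a leaked copy be traced back to its distribution-chain position.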
-
Multi-attribute Blockchain Decentralization Degree Measurement Model
张睿蓉, 牛保宁, 樊星. 面向多重属性的区块链去中心化程度度量模型[J]. 计算机科学, 2024, 51(5): 382-389.
ZHANG Ruirong, NIU Baoning, FAN Xing. Multi-attribute Blockchain Decentralization Degree Measurement Model[J]. Computer Science, 2024, 51(5): 382-389.
- Computer Science. 2024, 51 (5): 382-389. doi:10.11896/jsjkx.230300076
Abstract
Decentralization is a defining feature of blockchain, and as blockchain technology continues to develop, quantitatively measuring the degree of decentralization takes on growing practical significance. The evaluation indicators of existing measurement methods consider either node functionality or network performance, and these differing viewpoints cause large deviations between the proposed models and their evaluation indicators. This paper therefore focuses on the blockchain transaction process, extracts the key factors affecting decentralization from three aspects (node function integrity, network transmission, and data storage), establishes a multi-attribute decentralization degree measurement model (MDDMM), and implements a prototype. Experimental results show that data storage, node function integrity, and network transmission influence decentralization in decreasing order, and that Bitcoin's degree of decentralization is about 3.5%, 30%, and 38% higher than those of Bitcoin Cash, Ethereum, and Ethereum Classic respectively. Compared with existing measurement methods, MDDMM uses more comprehensive indicators, providing a theoretical basis and data support for quantifying the decentralization degree of blockchain platforms.
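One common way to turn per-attribute resource distributions into a single decentralization score is weighted normalized entropy. This sketch is not MDDMM itself: the attribute weights and node-share distributions below are invented, not the paper's calibrated values:

```python
import math

def normalized_entropy(shares):
    """Entropy of a resource distribution over n nodes, scaled to [0, 1];
    1.0 means the attribute is perfectly decentralized."""
    h = -sum(p * math.log(p) for p in shares if p > 0)
    return h / math.log(len(shares))

def decentralization_index(attributes, weights):
    return sum(w * normalized_entropy(s) for w, s in zip(weights, attributes))

storage   = [0.25, 0.25, 0.25, 0.25]   # evenly replicated ledger
functions = [0.4, 0.3, 0.2, 0.1]       # uneven node-function integrity
network   = [0.7, 0.1, 0.1, 0.1]       # one dominant relay
score = decentralization_index([storage, functions, network],
                               weights=[0.45, 0.35, 0.20])
print(round(score, 3))   # 0.909
```

A fully even distribution scores 1.0 per attribute, so the weighted sum directly reflects how concentration in any one attribute drags the platform's overall score down.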
-
Three-dimensional OFDM Constellation Encryption Scheme Based on Perturbed Spatiotemporal Chaos
赵耿, 吴锐, 马英杰, 黄思婕, 董有恒. 基于扰动时空混沌的三维OFDM星座加密方案[J]. 计算机科学, 2024, 51(5): 390-399.
ZHAO Geng, WU Rui, MA Yingjie, HUANG Sijie, DONG Youheng. Three-dimensional OFDM Constellation Encryption Scheme Based on Perturbed Spatiotemporal Chaos[J]. Computer Science, 2024, 51(5): 390-399.
- Computer Science. 2024, 51 (5): 390-399. doi:10.11896/jsjkx.230200169
Abstract
The swift evolution of wireless communication networks has produced a substantial surge in transmitted information, imposing greater demands on system transmission efficiency and communication security. To meet these requirements, this paper proposes a 3D OFDM constellation encryption scheme based on a perturbed spatiotemporal chaotic system. First, a feedback elementary cellular automaton is devised to enhance the elementary cellular automaton's periodicity and pseudo-randomness; its iterative output is normalized and fed into the spatiotemporal chaotic system as a perturbation. Bifurcation diagrams, Lyapunov exponents, regression mapping analysis, and randomness tests verify that the perturbed system exhibits excellent chaotic properties. Next, a novel three-dimensional 16-ary constellation map is developed that enlarges the minimum Euclidean distance between constellation points by 6.3%, and the perturbed spatiotemporal chaotic system is combined with physical-layer modulation encryption via three-dimensional constellation rotation. Simulation results show that the proposed algorithm improves bit error rate performance by roughly 1 dB over other 3D 16-ary constellation-rotation encryption algorithms. Moreover, security analyses of key space, key sensitivity, and statistical attacks confirm the superior security of the proposed scheme.
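The perturbation idea can be sketched with an elementary cellular automaton nudging a one-dimensional chaotic map. This is a toy stand-in: the rule number, perturbation strength, and the logistic map replacing the paper's feedback ECA and spatiotemporal system are all invented:

```python
def eca_step(cells, rule=30):
    """One synchronous step of an elementary cellular automaton: each
    cell's next state is the rule bit indexed by its 3-cell neighborhood."""
    n = len(cells)
    out = []
    for i in range(n):
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def perturbed_logistic(x0, cells, steps, mu=3.99, eps=1e-4):
    """Logistic map whose state is nudged each step by the normalized CA
    output, mirroring how the paper perturbs its spatiotemporal system."""
    x, seq = x0, []
    for _ in range(steps):
        cells = eca_step(cells)
        p = sum(cells) / len(cells)            # normalized CA output
        x = (mu * x * (1 - x) + eps * p) % 1.0
        seq.append(x)
    return seq

seed = [0] * 16
seed[7] = 1
keystream = perturbed_logistic(0.31, seed, steps=8)
print(len(keystream), all(0.0 <= v < 1.0 for v in keystream))
```

The tiny external nudge breaks the short periodic orbits that finite-precision chaotic maps otherwise fall into, which is the motivation for the perturbation.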
-
Correctness Verifiable Outsourcing Computation Scheme of Shortest Path Querying over Encrypted Graph Data
丁红发, 于莹莹, 蒋合领. 正确性可验证的密文图数据最短路径外包计算方案[J]. 计算机科学, 2024, 51(5): 400-413.
DING Hongfa, YU Yingying, JIANG Heling. Correctness Verifiable Outsourcing Computation Scheme of Shortest Path Querying over Encrypted Graph Data[J]. Computer Science, 2024, 51(5): 400-413.
- Computer Science. 2024, 51 (5): 400-413. doi:10.11896/jsjkx.230200031
Abstract
Massive graph data such as geolocation and social networks are widely used and contain abundant private information, and they usually require varied query services through secure outsourcing computation schemes. However, designing correctness-verifiable outsourcing computation protocols for graph data remains an open challenge. To this end, a correctness-verifiable outsourcing computation scheme for exact shortest paths over encrypted graph data is proposed. In this scheme, a breadth-first shortest path algorithm for encrypted graphs is constructed using additive homomorphic encryption, supporting outsourced exact shortest-distance queries over ciphertext. A probabilistic correctness-verification mechanism for the outsourced shortest-path results is then built using a bilinear-map accumulator. Analysis and proofs show that the scheme achieves correctness-verifiable exact shortest-path outsourcing with probabilistic reliability and has IND-CCA2 security in the random oracle model. Comparisons and experiments indicate that the proposed scheme has significant advantages over related schemes in security and functionality; compared with existing verifiable graph outsourcing schemes, the overheads of the initialization and encryption phase, the query phase, and the verification and decryption phase are reduced by 0.15%~23.19%, 12.91%~30.89%, and 1.13%~18.62% respectively.
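The plaintext skeleton of such a query can be sketched as BFS plus a client-side witness check. There is no encryption or accumulator here (the graph and all names are invented); the check only illustrates why a returned path makes the distance claim verifiable at all:

```python
from collections import deque

def bfs_distance(adj, s, t):
    """Server side: unweighted shortest distance by breadth-first search,
    returned together with a witness path."""
    prev, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return len(path) - 1, path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None, None

def verify(adj, s, t, dist, path):
    """Client side: the witness must be a real s-t walk of the claimed
    length (a soundness check, not the paper's cryptographic accumulator)."""
    return (path[0] == s and path[-1] == t and len(path) - 1 == dist
            and all(b in adj[a] for a, b in zip(path, path[1:])))

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
d, p = bfs_distance(adj, 0, 4)
print(d, verify(adj, 0, 4, d, p))        # 3 True
print(verify(adj, 0, 4, 2, [0, 1, 4]))   # False: forged short answer
```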
-
Key Generation Scheme Based on RIS Multipath Random Superposition
张仲鑫, 易鸣, 肖帅芳. 基于RIS多径随机叠加的物理层密钥生成方案[J]. 计算机科学, 2024, 51(5): 414-420.
ZHANG Zhongxin, YI Ming, XIAO Shuaifang. Key Generation Scheme Based on RIS Multipath Random Superposition[J]. Computer Science, 2024, 51(5): 414-420.
- Computer Science. 2024, 51 (5): 414-420. doi:10.11896/jsjkx.221000037
Abstract
In static scenarios, multipath phase and amplitude change slowly, so the physical-layer key generation rate based on wireless channel characteristics is low, making it difficult to meet the communication security requirements of high-speed data services. This paper proposes a key generation scheme based on random superposition of multipath components with the help of a reconfigurable intelligent surface (RIS). First, the sender and receiver exploit the real-time controllability of the RIS and the agility of its radiation pattern to separate multipath components in space according to their power-spectrum characteristics. Then, the multipath power-allocation strategy is changed many times within the channel coherence time, constructing an equivalent fast-varying channel and improving the randomness of the key source. Finally, a consistent secret key is generated through quantization, reconciliation, and privacy amplification. Simulation results show that, compared with existing RIS-antenna-based physical-layer key generation schemes, the proposed scheme increases source entropy by exploiting the channel's multipath characteristics; moreover, the power allocation across multipaths improves source randomness, bringing a certain gain in key rate.
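The final quantization step can be sketched as median thresholding of two noisy observations of the same channel. This is illustrative only: the Gaussian channel model and noise levels are invented, and the reconciliation and privacy-amplification stages that remove the remaining mismatches are omitted:

```python
import numpy as np

def quantize_bits(samples):
    """Threshold each channel observation at the median to get key bits,
    a common single-bit quantizer in physical-layer key generation."""
    return (samples > np.median(samples)).astype(int)

rng = np.random.default_rng(42)
channel = rng.normal(size=256)                       # shared multipath gains
alice = channel + rng.normal(scale=0.05, size=256)   # Alice's noisy estimate
bob = channel + rng.normal(scale=0.05, size=256)     # Bob's noisy estimate

ka, kb = quantize_bits(alice), quantize_bits(bob)
disagreement = float(np.mean(ka != kb))
print(disagreement < 0.1)   # only a few mismatches remain for reconciliation
```

Reciprocity keeps the two bit strings mostly equal; the faster the RIS re-randomizes the effective channel, the more fresh samples (and hence key bits) fit into each coherence interval.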