Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 51 Issue 1, 15 January 2024
Special Issue on the 50th Anniversary of Computer Science
Cross-domain Data Management
DU Xiaoyong, LI Tong, LU Wei, FAN Ju, ZHANG Feng, CHAI Yunpeng
Computer Science. 2024, 51 (1): 4-12.  doi:10.11896/jsjkx.yg20240102
As data becomes a new production factor and building a digital China is promoted as a top-level strategy, cross-domain data sharing and circulation play a crucial role in maximizing the value of data factors. The country has taken a series of measures, such as completing the overall layout design of the national integrated data center system and launching the “East-West Computing” project, providing infrastructure for the cross-domain application of data factors. Cross-domain data management faces challenges in communication, data modeling, and data access. This paper explores the connotation, research challenges, and key technologies of cross-domain data management from three perspectives: cross-spatial domain, cross-administrative domain, and cross-trust domain, and discusses its future development trends.
Survey on Cross-modality Object Re-identification Research
CUI Zhenyu, ZHOU Jiahuan, PENG Yuxin
Computer Science. 2024, 51 (1): 13-25.  doi:10.11896/jsjkx.yg20240103
Object re-identification (ReID) technology aims to match the same object captured by cameras across different areas at different times. The key is to distinguish different objects through fine-grained differences between individuals, and the technology is widely used in security control, criminal investigation, monitoring, etc. Traditional ReID technology is usually suited to visible-light cameras under good lighting conditions, but its performance is severely limited in low-light conditions. Infrared cameras are often used to collect infrared images of objects under low light owing to their outstanding night-vision performance. Therefore, cross-modality object re-identification technology focuses on achieving uninterrupted object ReID across day and night, from visible images to infrared images (VI-ReID) and vice versa. In recent years, VI-ReID technology has made significant progress. However, a comprehensive summary and in-depth analysis of existing models are still lacking. To this end, this paper conducts an in-depth investigation and summary of relevant research and novel methods in the field of VI-ReID. It discusses the challenges faced by existing methods in real-world scenarios and categorizes them from two aspects: model classification and model evaluation. First, focusing on the research challenges, VI-ReID methods are categorized into generative and non-generative methods. Secondly, the evaluation datasets and evaluation metrics are reviewed and summarized. Finally, the remaining challenges in VI-ReID are discussed and future development trends are prospected.
Exploring the Scientific Nature and Scientific Questions of Data Science
CHAO Lemen
Computer Science. 2024, 51 (1): 26-34.  doi:10.11896/jsjkx.231100121
As an emerging academic field, data science has garnered attention for its scientific nature, yet its scientific questions have not been clearly defined. This paper explores the scientific nature of data science from four aspects: scientific research paradigms and methodologies, falsifiability and reproducibility, scientific spirit and rapid iteration, and scientific research agenda and theoretical framework. It also answers the question of why data science is an emerging science. Building upon this foundation and incorporating concepts such as the DIKW model (data-information-knowledge-wisdom pyramid or hierarchy), the DMP model (data-model-problem model), the statistical and machine learning methodologies of data science, and the processes and activities in data science, this paper presents seven core scientific questions in data science: the precedence of explanation or data, problem alignment with data or data alignment with problems, prioritizing trust in data or models, emphasizing performance or interpretability, data partitioning strategies, solving unknown data problems with known data, and the role of humans within or outside the loop. Finally, four recommendations for data science research are proposed: a focus on theoretical research within data science itself, the further separation and specialization of data science in terms of science, technology, and engineering, strengthening the theory and practice of data science empowered by artificial intelligence, and fostering collaboration between the discipline of data science and data science within other disciplines.
Filter Data Structures:A Survey
WANG Hancheng, DAI Haipeng, CHEN Shusen, CHEN Zhipeng, CHEN Guihai
Computer Science. 2024, 51 (1): 35-40.  doi:10.11896/jsjkx.231000193
Filter data structures approximately determine whether an element exists in a given set. Typical filter data structures, such as Bloom filters, cuckoo filters, and quotient filters, sacrifice query accuracy for lower memory consumption and lower query time overhead. Owing to their spatial and temporal efficiency, filter data structures are now widely used for approximate membership query operations in computer networks, the Internet of Things, database systems, file systems, bioinformatics, machine learning, and other fields. Since the 1970s, filters have been extensively studied, and their research ideas are constantly evolving. This paper compiles the classic studies on filter data structures of the past fifty years, summarizes existing studies based on the mechanisms of filter data structures, and analyzes the relationships between different studies. Finally, future research directions for filter data structures are discussed.
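The approximate-membership idea the survey describes can be made concrete with a minimal Bloom filter. This is an illustrative stdlib-only sketch; the array size m, probe count k, and SHA-256-based probing are arbitrary choices for the example, not drawn from any particular paper in the survey:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash probes into an m-bit array.
    May return false positives, never false negatives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _probes(self, item):
        # Derive k independent probe positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = True

    def __contains__(self, item):
        # Present only if every probe bit is set (hence possible false positives).
        return all(self.bits[p] for p in self._probes(item))

bf = BloomFilter()
for word in ["bloom", "cuckoo", "quotient"]:
    bf.add(word)
```

The trade-off the abstract mentions is visible here: the filter stores only m bits regardless of element size, at the cost of a tunable false-positive rate.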
Survey of Learning-based Filters
LI Meng, DAI Haipeng, SUI Yongxi, GU Rong, CHEN Guihai
Computer Science. 2024, 51 (1): 41-49.  doi:10.11896/jsjkx.231000202
As space-efficient probabilistic structures, filters can efficiently answer approximate set membership queries. In recent years, with the development of machine learning technology, some learning-based filters have exceeded traditional filters in performance. These learning-based filters take data distribution information into account and treat set membership queries as a binary classification problem, achieving superior performance compared to traditional filters. Inspired by this, the research field of learning-based filters has progressed rapidly and several variants have emerged. However, a systematic review and comparison of recent related work is still lacking. To fill this gap, this paper comprehensively reviews recent work on learning-based filters, analyzes their structure design and theoretical underpinnings, and predicts future development directions.
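The "membership as binary classification" idea can be sketched as a toy learned filter. The score function below is a stand-in for a trained classifier (its even/odd rule is purely hypothetical), and an exact backup set stands in for the backup Bloom filter that learned-filter designs use to guarantee zero false negatives:

```python
def model_score(key: int) -> float:
    """Stand-in for a learned classifier; a real system trains on key features.
    Hypothetical rule: even keys look like members."""
    return 1.0 if key % 2 == 0 else 0.1

class LearnedFilter:
    def __init__(self, members, threshold=0.5):
        self.threshold = threshold
        # Members the model rejects go into a backup structure
        # (an exact set here; a small Bloom filter in real designs),
        # so no true member is ever reported absent.
        self.backup = {k for k in members if model_score(k) < threshold}

    def __contains__(self, key):
        return model_score(key) >= self.threshold or key in self.backup

members = [2, 4, 7, 10]   # 7 is a member the toy model would miss
lf = LearnedFilter(members)
```

Note the asymmetry: false negatives are eliminated by construction, while false positives (any non-member the model scores highly, e.g. an even non-member here) remain, mirroring classic filter semantics.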
Study on Model Migration of Natural Language Processing for Domestic Deep Learning Platform
GE Huibin, WANG Dexin, ZHENG Tao, ZHANG Ting, XIONG Deyi
Computer Science. 2024, 51 (1): 50-59.  doi:10.11896/jsjkx.230600051
Deep learning platforms play an essential role in the development of the new generation of artificial intelligence. In recent years, China's domestic high-performance artificial intelligence software and hardware systems, represented by the Ascend platform, have developed rapidly, opening up a new path for deep learning platforms in China. At the same time, in order to explore and resolve potential loopholes in the Ascend system, the platform's developers actively carry out the migration of commonly used deep learning models together with researchers. This paper furthers these efforts from the perspective of natural language processing, aiming at how to refine the domestic deep learning platform. Four natural language processing tasks are highlighted: neural machine translation, machine reading comprehension, sequence labeling, and text classification, along with four classical neural models: Albert, RNNSearch, BERT-CRF, and TextING. Their migration to the Ascend platform is described in detail. Based on this model migration study, the paper summarizes the deficiencies of the Ascend platform's architecture design for research and business in natural language processing. These deficiencies fall into four essential aspects: 1) the lack of dynamic space allocation for computing graph nodes; 2) incompatibility with the sinking of resource operators on the acceleration-device side; 3) a fusion of graph and computation that is not flexible enough to handle unseen model structures; and 4) defects in the mixed-precision training strategy. To overcome these problems, this paper puts forward avoidance methods or solutions. Finally, constructive suggestions are provided for, including but not limited to, deep learning platforms in China.
Survey of Unsupervised Sentence Alignment
GU Shiwei, LIU Jing, LI Bingchun, XIONG Deyi
Computer Science. 2024, 51 (1): 60-67.  doi:10.11896/jsjkx.231100024
Unsupervised sentence alignment is an important and challenging problem in the field of natural language processing. The task aims to find corresponding sentences across different languages, providing basic support for cross-lingual information retrieval, machine translation, and other applications. This survey summarizes the current research status of unsupervised sentence alignment from three aspects: methods, challenges, and applications. In terms of methods, unsupervised sentence alignment covers a variety of approaches, including those based on multilingual embeddings, clustering, and self-supervised or generative models. However, unsupervised sentence alignment faces challenges such as diversity, language differences, and domain adaptation. The ambiguity and diversity of languages complicate sentence alignment, especially for low-resource languages. Despite the challenges, unsupervised sentence alignment has important applications in fields such as cross-lingual information retrieval, machine translation, and multilingual information aggregation. Through unsupervised sentence alignment, information in different languages can be integrated to improve the effectiveness of information retrieval. At the same time, research in this field is also constantly promoting technological innovation and development, providing opportunities to achieve more accurate and robust unsupervised sentence alignment.
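The embedding-based family of methods the survey describes can be illustrated with a toy greedy aligner over hypothetical cross-lingual sentence vectors. Real systems obtain such vectors from multilingual encoders; the sentence ids and 3-dimensional vectors below are made up for the example:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical cross-lingual sentence embeddings in a shared space.
src = {"s1": [1.0, 0.1, 0.0], "s2": [0.0, 1.0, 0.2]}
tgt = {"t1": [0.1, 0.9, 0.1], "t2": [0.9, 0.0, 0.1]}

def align(src, tgt):
    """Greedy one-to-one alignment: each source sentence takes the
    not-yet-used target sentence with the highest cosine similarity."""
    pairs, used = {}, set()
    for s, sv in src.items():
        cands = [(cosine(sv, tv), t) for t, tv in tgt.items() if t not in used]
        _, best = max(cands)
        pairs[s] = best
        used.add(best)
    return pairs

pairs = align(src, tgt)
```

Greedy matching is the simplest decoding choice; published methods typically add margin-based scoring or global (bipartite) matching to handle the ambiguity the survey discusses.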
Security of Large Language Models:Current Status and Challenges
ZHAO Yue, HE Jinwen, ZHU Shenchen, LI Congyi, ZHANG Yingjie, CHEN Kai
Computer Science. 2024, 51 (1): 68-71.  doi:10.11896/jsjkx.231100066
Large language models have revolutionized natural language processing, offering exceptional text understanding and generation capabilities that benefit society significantly. However, they also pose notable security challenges that demand the attention of security researchers. This paper introduces these concerns, including malicious applications of prompt injection attacks, reliability issues arising from model hallucinations, privacy risks tied to data protection, and the problem of prompt leakage. To enhance model security, a comprehensive approach is required, focusing on privacy preservation, interpretability research, and the stability and robustness of model distribution.
Review of Unsupervised Domain Adaptive Person Re-identification Based on Pseudo-labels
JING Yeyiran, YU Zeng, SHI Yunxiao, LI Tianrui
Computer Science. 2024, 51 (1): 72-83.  doi:10.11896/jsjkx.230700101
Person re-identification is one of the hot research topics in the field of computer vision. In recent years, in order to address the scarcity of labeled data in practical applications of person re-identification and to make effective use of existing labeled data, researchers have proposed domain adaptive methods based on generative adversarial networks and on pseudo-labels for cross-domain person re-identification. Pseudo-label-based unsupervised domain adaptive person re-identification is favored by researchers for its remarkable effectiveness. This paper reviews work on pseudo-label-based unsupervised domain adaptive person re-identification over the past seven years and, from the perspective of model training, divides pseudo-label-based methods into two stages. 1) Pseudo-label generation stage: most existing works generate pseudo-labels in the target domain with clustering methods, while some use graph matching based on graph structure learning and graph neural network methods. 2) Pseudo-label refinement stage: existing pseudo-label refinement methods are summarized into refinement based on representation learning and refinement based on similarity learning, and the corresponding model methods are organized respectively. Finally, the current challenges of pseudo-label-based unsupervised domain adaptive person re-identification are discussed and possible future development directions are prospected.
Research Progress on Colonel Blotto Game Models and Solving Methods
LUO Junren, ZOU Mingwo, CHEN Shaofei, ZHANG Wanpeng, CHEN Jing
Computer Science. 2024, 51 (1): 84-98.  doi:10.11896/jsjkx.230600011
Resource allocation under confrontation conditions lies at the core of most game decision problems. From fitting optimal solutions to computing game equilibrium solutions, resource allocation strategy solving based on game theory is a frontier topic in the cognitive decision-making field. This paper summarizes and analyzes the Colonel Blotto game model and its solution methods for adversarial resource allocation. Firstly, the differences between offline and online strategy learning, strategy games and related solution concepts, and online optimization and regret values are briefly introduced. Secondly, six types of Colonel Blotto game models are presented: the continuous Blotto game, discrete Colonel Lotto game, generalized Colonel Blotto game, generalized Lotto Blotto game, generalized rule Colonel Lotto game, and online discrete Colonel Lotto game. Then, the paper distinguishes two stages (offline and online) and three types of game scenarios (single, repeated, and multi-stage), and analyzes solution methods for the Colonel Blotto game. Finally, future research frontiers are analyzed and prospected from four aspects: typical application exploration, generalized game models, game solving methods, and future research prospects. The main purpose is to give an overview of current Colonel Blotto game research, hoping to inspire work on resource allocation and game theory under confrontation conditions.
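For the discrete case, the basic objects of the game (allocations, battlefield payoffs, best responses) can be sketched in a few lines; the troop counts and the fixed opponent strategy below are illustrative only, not a setting from the survey:

```python
def allocations(troops, fields):
    """Enumerate all ways to split `troops` identical units over `fields` battlefields."""
    if fields == 1:
        yield (troops,)
        return
    for first in range(troops + 1):
        for rest in allocations(troops - first, fields - 1):
            yield (first,) + rest

def payoff(a, b):
    """Battlefields won by A minus battlefields won by B (ties score zero)."""
    return sum((x > y) - (x < y) for x, y in zip(a, b))

def best_response(opponent, troops, fields):
    """Pure-strategy best response to a fixed (known) opponent allocation."""
    return max(allocations(troops, fields), key=lambda a: payoff(a, opponent))

# With 6 troops over 3 fields against (2, 2, 2), the best one can do is
# overwhelm two fields and concede one, e.g. (3, 3, 0) for payoff +1.
br = best_response((2, 2, 2), 6, 3)
```

Against a fixed opponent a pure best response exists; the equilibrium analysis the survey covers is harder precisely because both sides randomize over such allocations.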
Database & Big Data & Data Science
Survey of Inferring Information Diffusion Networks
WANG Yuchen, GAO Chao, WANG Zhen
Computer Science. 2024, 51 (1): 99-112.  doi:10.11896/jsjkx.230500127
Information diffusion can be modeled as a stochastic process over a network. However, the topology of the underlying diffusion network and the pathways of spread are often not observable in real-world scenarios. Therefore, inferring diffusion networks becomes critical for analyzing and understanding the diffusion process, tracking the pathways of spread, and even predicting future contagion events. There has been a surge of interest in diffusion network inference over the past few years. This paper investigates and summarizes representative research in the field of diffusion network inference. Finally, it analyzes the open problems of diffusion network inference and provides a new perspective on this field.
Generation Algorithm of Temporal Networks with Anchor Communities
ZHENG Shuwen, WANG Chaokun
Computer Science. 2024, 51 (1): 113-123.  doi:10.11896/jsjkx.231000153
Algorithms for network analysis tasks require synthetic graph datasets to evaluate their effectiveness and efficiency. Real-world graph data not only possess topological features such as community structures, but also contain temporal information revealing evolutionary semantics. Nodes of real-world communities may interact with each other within a specific anchor time window. However, existing graph generation methods have limitations: most concentrate on either static community structures or temporal graphs without community structures, and are thus weak at generating communities that are active during an anchor time period. To overcome this weakness, this paper introduces the concept of the anchor community to depict frequent interactions within a group of nodes during an anchor time window. It then proposes an algorithm to synthesize general temporal networks based on a distribution probability generation model, and further proposes an efficient generation algorithm for temporal networks with anchor communities (GTN-AC), which accepts configuration input such as anchor time windows and specified degree and timestamp distributions. Extensive experimental results indicate that, compared with baseline methods, GTN-AC achieves faster generation speed while ensuring preferable generation quality.
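The anchor-community idea (a node group whose interactions concentrate in a time window) can be sketched with a toy generator. All parameters, and the uniform-timestamp choice, are illustrative stand-ins, not GTN-AC's actual configurable distributions:

```python
import random

def generate_temporal_network(n_nodes, n_edges, t_max,
                              anchor_nodes, anchor_window, anchor_edges, seed=0):
    """Toy temporal-network generator: background edges get timestamps
    uniform over [0, t_max); anchor-community edges are drawn only between
    `anchor_nodes` with timestamps inside `anchor_window`."""
    rng = random.Random(seed)
    edges = []
    for _ in range(n_edges):                # sparse background interactions
        u, v = rng.sample(range(n_nodes), 2)
        edges.append((u, v, rng.uniform(0, t_max)))
    lo, hi = anchor_window
    for _ in range(anchor_edges):           # dense interactions in the window
        u, v = rng.sample(anchor_nodes, 2)
        edges.append((u, v, rng.uniform(lo, hi)))
    return edges

edges = generate_temporal_network(
    n_nodes=20, n_edges=50, t_max=100.0,
    anchor_nodes=[0, 1, 2, 3], anchor_window=(40.0, 50.0), anchor_edges=30)
```

The resulting edge list is "anchored": nodes 0-3 interact far more often inside the window (40, 50) than the background rate would predict, which is the structure the paper's generator is designed to produce at scale.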
Parallel Transaction Execution Models Under Permissioned Blockchains
DONG Hao, ZHAO Hengtai, WANG Ziyao, YUAN Ye, ZHANG Aoqian
Computer Science. 2024, 51 (1): 124-132.  doi:10.11896/jsjkx.230800201
Most existing permissioned blockchain systems adopt serial transaction execution, which cannot exploit the high performance of multi-core processors. Serial execution becomes a performance bottleneck in permissioned blockchains equipped with high-performance consensus algorithms. To reduce transaction execution time in permissioned blockchains with the order-execute-validate architecture, two transaction concurrency models are proposed. First, an address-table-based parallel execution model is proposed, which maps the read and write sets of transactions to an address table through static analysis and constructs a scheduling graph from the address table to achieve parallel execution of transactions without data conflicts. Second, a parallel execution model based on a multi-version timestamp ordering algorithm is proposed, in which the leader node uses multi-version timestamp ordering to pre-execute transactions in parallel and stores the scheduling graph in the block in the form of transaction dependency triplets. All validation nodes schedule via the transaction dependency triplets to achieve parallel execution of transactions while preserving consistency. Finally, the two parallel transaction execution models are implemented in Tendermint, and performance experiments on the transaction execution phase and with multiple nodes are conducted. Experimental results show that the models reduce transaction execution time by 68.6% and 28.5% with a single node and 8 threads, and increase blockchain throughput by about 43.4% and 19.5% with 4 peer nodes and 8 threads per node, respectively.
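The core idea of the first model (transactions whose read/write sets do not conflict can run in parallel) can be sketched as greedy conflict-free batching over declared read/write sets. This is a simplified illustration of conflict detection, not the paper's address-table or scheduling-graph construction:

```python
def conflicts(t1, t2):
    """Two transactions conflict if one writes an address
    the other reads or writes (write-write or read-write)."""
    return bool(t1["writes"] & (t2["reads"] | t2["writes"]) or
                t2["writes"] & t1["reads"])

def schedule(txs):
    """Greedily pack transactions into batches: each batch is internally
    conflict-free and may execute in parallel; batches run in order."""
    batches = []
    for tx in txs:
        for batch in batches:
            if not any(conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

txs = [
    {"id": 1, "reads": {"A"}, "writes": {"B"}},
    {"id": 2, "reads": {"C"}, "writes": {"D"}},  # independent of tx 1
    {"id": 3, "reads": {"B"}, "writes": {"E"}},  # reads what tx 1 writes
]
batches = schedule(txs)
```

Here transactions 1 and 2 land in the same parallel batch, while transaction 3 must wait for transaction 1's write to B, mirroring the dependency information the paper encodes in its scheduling graph.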
Interest Capturing Recommendation Based on Knowledge Graph
JIN Yu, CHEN Hongmei, LUO Chuan
Computer Science. 2024, 51 (1): 133-142.  doi:10.11896/jsjkx.230500133
As a kind of auxiliary information, knowledge graphs can provide more contextual and semantic association information for recommendation systems, thereby improving the accuracy and interpretability of recommendations. By mapping items into a knowledge graph, recommender systems can inject external knowledge learned from the graph into user and item representations, thereby enhancing both. However, when learning user preferences, knowledge graph recommendation based on graph neural networks mainly utilizes knowledge such as attribute and relationship information through item entities. Since user nodes are not directly connected to the knowledge graph, different relational and attribute information remains semantically independent and lacks correlation with respect to user preferences, making it difficult for knowledge-graph-based recommendation to accurately capture users' fine-grained preferences from the information in the graph. Therefore, to address this difficulty, this paper proposes an interest-capturing recommendation algorithm based on a knowledge graph (KGICR). The algorithm leverages the relational and attribute information in knowledge graphs to learn user interests and improve the embedding representations of users and items. To fully utilize the relational information, a relational interest module is designed to learn users' fine-grained interests in different relations. This module represents each interest as a combination of relation vectors in the knowledge graph and employs a graph convolutional neural network to transfer user interests between the user-item graph and the knowledge graph to learn user and item embedding representations. Furthermore, an attribute interest module is designed to learn users' fine-grained interests in different attributes. This module matches users and items with similar attributes by splitting and embedding, and uses a method similar to the relational interest module for message propagation. Finally, experiments are conducted on two benchmark datasets, and the results demonstrate the effectiveness and feasibility of the proposed method.
Pre-training of Heterogeneous Graph Neural Networks for Multi-label Document Classification
WU Jiawei, FANG Quan, HU Jun, QIAN Shengsheng
Computer Science. 2024, 51 (1): 143-149.  doi:10.11896/jsjkx.230600079
Multi-label document classification aims to associate document instances with relevant labels and has received increasing research attention in recent years. Existing multi-label document classification methods attempt to exploit information beyond the text, such as document metadata or label structure. However, these methods either use only the semantic information of metadata or ignore the long-tail distribution of labels, thereby overlooking higher-order relationships between documents and their metadata as well as the distribution pattern of labels, which affects the accuracy of multi-label document classification. Therefore, this paper proposes a new multi-label document classification method based on the pre-training of heterogeneous graph neural networks. The method constructs a heterogeneous graph from documents and their metadata, adopts two contrastive pre-training strategies to capture the relationships between documents and their metadata, and mitigates the long-tail distribution of labels through a loss function to improve classification accuracy. Experimental results on the benchmark dataset show that the proposed method outperforms Transformer, BertXML, and MATCH by 8%, 4.75%, and 1.3%, respectively.
Computer Graphics & Multimedia
Survey of Image Data Augmentation Techniques Based on Deep Learning
SUN Shukui, FAN Jing, SUN Zhongqing, QU Jinshuai, DAI Tingting
Computer Science. 2024, 51 (1): 150-167.  doi:10.11896/jsjkx.230500103
In recent years, deep learning has demonstrated excellent performance in many computer vision tasks such as image classification, object detection, and image segmentation. Deep neural networks usually rely on a large amount of training data to avoid overfitting, so excellent performance is inseparable from the support of massive image data. However, in many real-world applications it is often difficult to obtain sufficient image data, and data collection is expensive and time-consuming. Image data augmentation has effectively alleviated the problem of insufficient data and, as an effective way to increase the quantity, quality, and diversity of training data, has become a necessary component for the successful application of deep learning models to image data. Understanding existing algorithms helps in choosing appropriate methods and developing new ones. This paper elaborates on the research motivation for image data augmentation, systematically classifies the numerous data augmentation algorithms, analyzes each type in detail, and then points out considerations in the design of data augmentation algorithms and their scope of application. The effectiveness of data augmentation is demonstrated on three computer vision tasks, and finally the paper summarizes and proposes prospects for future research directions in data augmentation.
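Classic geometric augmentations of the kind the survey categorizes can be sketched on a nested-list "image"; the flip probability and crop size below are arbitrary choices for the example:

```python
import random

def hflip(img):
    """Horizontal flip: reverse each row of pixels."""
    return [row[::-1] for row in img]

def random_crop(img, size, rng):
    """Crop a random size x size window from the image."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

def augment(img, rng):
    """One augmented sample: maybe flip, then crop.
    Each call with a fresh seed yields a different training view."""
    if rng.random() < 0.5:
        img = hflip(img)
    return random_crop(img, 2, rng)

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
out = augment(img, random.Random(0))
```

Each augmented view preserves the label while perturbing the input, which is exactly how augmentation multiplies the effective size and diversity of a training set.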
Multimodal Pre-training Method for Multi-view Contrastive Learning and Semantic Enhancement
TANG Jia, GUO Yan, YE Mingwei, WU Guixing
Computer Science. 2024, 51 (1): 168-174.  doi:10.11896/jsjkx.230700084
The visual language pre-training (VLP) model has shown impressive performance on multimodal tasks through contrastive learning and other methods. However, existing research has overlooked the benefits of multi-view descriptions and the importance of semantics and grammar. To address this issue, this paper proposes multi-view learning and semantic enhancement for multimodal pre-training (MulSE), which consists of three main components: 1) introducing multi-view contrastive learning with a generator into the fused-encoder model; 2) proposing multimodal text reordering as a novel self-supervised visual language pre-training task; and 3) increasing and exploring the optimal MLM masking ratio, maximizing the use of visual information. By improving the pre-training tasks and employing multiple optimal strategies, experiments demonstrate that MulSE enhances intra-modal and inter-modal understanding and improves the comprehension of syntax and semantics within text. With only 4M pre-training samples, it achieves results comparable to models trained on much larger datasets for image-text retrieval, and its results on visual question answering and visual entailment tasks outperform previous comprehension-oriented VLP models.
Method of Infrared Small Target Detection Based on Multi-depth Feature Connection
WANG Weijia, XIONG Wenzhuo, ZHU Shengjie, SONG Ce, SUN He, SONG Yulong
Computer Science. 2024, 51 (1): 175-183.  doi:10.11896/jsjkx.230200037
Small infrared targets have few pixels and appear against complex backgrounds, which leads to low detection accuracy and high time consumption. This paper proposes a multi-depth feature connection network. Firstly, the model uses a multi-depth cross-connect backbone to increase feature transfer between different layers and enhance feature extraction capabilities. Secondly, an attention-guided pyramid structure is designed to enhance deep features and separate the background from the target. Thirdly, an asymmetric fusion decoding structure is proposed to better preserve texture and position information during decoding. Finally, the model introduces a point regression loss to obtain the center coordinates. The proposed network is trained and tested on the SIRST dataset and a self-built infrared small target dataset. Experimental results show that, compared with existing data-driven and model-driven algorithms, the proposed model achieves higher detection accuracy and faster speed in complex scenes. Compared with the suboptimal model, its average precision improves by 5.41%, and its detection speed reaches 100.8 FPS.
Weighted-loss-based Up-sampling for Point Cloud Occupancy Map Video
CHEN Hang, LI Li, LIU Dong, LI Houqiang
Computer Science. 2024, 51 (1): 184-189.  doi:10.11896/jsjkx.230600161
In video-based point cloud compression (V-PCC), a 3D point cloud is divided into hundreds of patches and then mapped onto a 2D grid, generating a texture video that captures texture information and a geometry video that captures geometry information. Meanwhile, an occupancy map video is also generated to record whether each pixel in the former two videos corresponds to a point in the reconstructed point cloud. The quality of the occupancy map video is therefore directly linked to the quality of the reconstructed point cloud. To save bits, the occupancy map video is down-sampled at the encoder and up-sampled with a simplistic method at the decoder. This paper uses a deep learning-based up-sampling method to replace the simple up-sampling in the original V-PCC, improving the quality of the up-sampled occupancy map videos and thus of the reconstructed point cloud. A weighted distortion loss function is introduced into network training so that, when reconstructing a point cloud, as few normal points as possible are removed while as many noisy points as possible are removed. Experimental results show that the proposed method significantly improves both the subjective and objective performance of V-PCC.
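The weighted-distortion idea (penalizing the removal of normal, occupied points more heavily than the retention of noisy ones) can be sketched as a weighted binary cross-entropy over occupancy pixels; the weight values below are illustrative, not the paper's:

```python
import math

def weighted_bce(pred, target, w_occupied=5.0, w_empty=1.0):
    """Weighted binary cross-entropy over occupancy-map pixels.
    Occupied pixels (target = 1) carry a larger weight, so mistakenly
    dropping a normal point costs more than mistakenly keeping a noisy one.
    Weights are illustrative stand-ins for tuned values."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, 1e-7), 1 - 1e-7)   # clamp for numerical safety
        w = w_occupied if t == 1 else w_empty
        total += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)
```

With this asymmetry, the trained up-sampler is biased toward keeping true points in the reconstructed cloud, accepting a few extra noisy points in exchange.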
Raindrop In-Situ Captured Benchmark Image Dataset and Evaluation
CHEN Tianyi, XUE Wen, QUAN Yuhui, XU Yong
Computer Science. 2024, 51 (1): 190-197.  doi:10.11896/jsjkx.230500125
When taking photos through glass windows on rainy days, raindrops adhering to the glass surface appear in the images, which not only degrades image visibility but also prevents many computer vision algorithms from functioning properly. Raindrop removal research aims to remove raindrops from such rainy images. Single-image raindrop removal presents significant challenges due to the diverse and unique forms of raindrops found in nature. The varying transparency of raindrops further complicates the removal of raindrop artifacts and degrades the imaging quality of background scenes, adversely impacting the performance of existing raindrop removal algorithms. To facilitate a comprehensive understanding of this research area, this paper provides a detailed introduction to single-image raindrop removal, covering two main aspects: single-image raindrop removal algorithms and joint raindrop removal algorithms for single images. Additionally, a summary and evaluation of existing algorithms is presented. In deep learning-based methods, algorithm performance is often limited by the quality and quantity of the dataset, yet existing raindrop datasets commonly suffer from low-quality raindrop images and insufficient image quantities. This paper therefore proposes the higher education megacenter (HEMC) dataset. Camera shake, window reflections, and other external disturbances are avoided as much as possible, improving the image quality of the training set and the accuracy of the test set, and indirectly improving the performance of raindrop removal methods. HEMC is evaluated in various aspects using visual comparisons and objective metrics. Experimental results show the diversity of the raindrop images in HEMC and the stability of the objective metrics. In addition, the results verify the universality and stability of HEMC for raindrop removal methods.
Seal Removal Based on Generative Adversarial Gated Convolutional Network
WU Guibin, YANG Zongyuan, XIONG Yongping, ZHANG Xing, WANG Wei
Computer Science. 2024, 51 (1): 198-206.  doi:10.11896/jsjkx.230500232
Abstract PDF(4303KB) ( 1520 )   
References | Related Articles | Metrics
Seals on invoices and documents seriously affect the accuracy of text recognition,so seal elimination techniques play an important role in the pre-processing of document analysis and document enhancement.However,threshold-based methods and deep learning-based methods suffer from incomplete seal elimination and modification of background pixels.Thus,this paper proposes a two-stage seal elimination network,SealErase.The first stage is a U-shaped segmentation network for generating binarized masks with seal positions,and the second stage is an inpainting network for refined seal elimination.Due to the lack of available public paired datasets for seal elimination,existing methods cannot design pixel-level evaluation metrics to measure the quality of the generated images.Moreover,training the neural network using paired training sets can effectively improve the performance of the network.To this end,this paper constructs a highly simulated seal elimination dataset containing 8 000 samples,taking into account the generalization to real scenes and the robustness to noise.The seals are divided into two types:seals in real document images and synthetic seals.In order to objectively evaluate the performance of SealErase,a comprehensive evaluation metric is devised,based on the image generation quality and the recognition accuracy of characters obscured by seals,to evaluate the elimination performance of the SealErase network.Existing seal elimination methods are compared on the seal elimination dataset,and experimental results show that the SealErase network improves the peak signal-to-noise ratio by 26.79% and the mean structural similarity by 4.48% in the evaluation metric of image generation quality compared to the state-of-the-art methods.After seal elimination by the SealErase network,the recognition accuracy of characters obscured by seals is improved by 38.86%.Experimental results show that SealErase is equally effective in eliminating seals and preserving the obscured characters in real scenes.
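The comprehensive metric above combines image-generation quality with character recognition accuracy. As a minimal illustration of the image-quality half only (not the paper's implementation), PSNR can be computed directly from the per-pixel mean squared error:

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized grayscale images.

    Images are nested lists of pixel intensities in [0, max_val];
    a restoration closer to the reference yields a higher PSNR.
    """
    flat_ref = [p for row in reference for p in row]
    flat_res = [p for row in restored for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_res)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A better restoration scores higher against the clean reference.
ref = [[100, 120], [130, 140]]
noisy = [[110, 110], [140, 130]]
cleaner = [[101, 119], [131, 139]]
```

A percentage improvement such as the reported 26.79% would then be computed between the PSNR of a baseline's output and that of SealErase on the same reference.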
Error-bounded Compatible High-order Remeshing
ZHANG Wenxiang, GUO Jiapeng, FU Xiaoming
Computer Science. 2024, 51 (1): 207-214.  doi:10.11896/jsjkx.230700116
Abstract PDF(4120KB) ( 1460 )   
References | Related Articles | Metrics
This paper proposes a method to construct high-quality,compatible high-order surface meshes with bounded approximation errors.Given two closed,oriented,and topologically equivalent surfaces and a sparse set of corresponding landmarks,the proposed method contains two steps:(1)generate compatible high-order meshes with bounded approximation errors;(2)reduce mesh complexity while ensuring that approximation errors are always bounded,and reduce both the distortion between the compatible meshes and their approximation errors with respect to the original meshes by optimizing the control vertices.The first step generates compatible linear meshes with bounded approximation errors and then upgrades them to high-order meshes.In the second step,the mesh complexity is effectively reduced by iteratively performing edge-based remeshing and increasing the compatible target edge lengths.The Jacobian matrix of the mapping between 3D Bézier triangles is derived in tangent space,so that the distortion energy can be effectively optimized.By optimizing the distortion energy and the approximation error energy,the distortion between compatible meshes and the approximation errors are effectively reduced.Tests on various pairs of complex models demonstrate the efficacy and practicability of the proposed method for constructing high-quality compatible high-order meshes with bounded approximation errors.
B-spline Functional Model of Terrestrial Sunshape Based on Measured Data
SHEN Tong, ZHAO Le, FENG Jieqing
Computer Science. 2024, 51 (1): 215-224.  doi:10.11896/jsjkx.230700209
Abstract PDF(5255KB) ( 1448 )   
References | Related Articles | Metrics
The function describing the distribution of solar radiative energy received on the ground is called the surface sunshape model.It is important for accurate simulation of the distribution of radiative flux density on the receiver in a solar power tower.The percentage of halo radiative energy in the total solar radiative energy is called the CircumSolar Ratio(CSR),which is a key parameter in the surface sunshape model.At present,the commonly used surface sunshape models have the drawbacks of low accuracy,CSR misalignment,discontinuity,and inability to be integrated analytically.To address these problems,a new sunshape model in terms of a tensor-product B-spline function is proposed based on observation datasets.Firstly,two observation datasets are processed via data cleaning,denoising,normalization,averaging,and concatenation.As a result,84 sets of data with different CSR values are obtained.Each set of data corresponds to a solar radiative energy scanning profile and varies with the incident angle θ.Then,the data set with CSR=0.005,which exhibits the most drastic change,is chosen as the sample case for constrained B-spline function fitting,whose knot vector and number of control coefficients are determined through a differential evolution algorithm and experiments,respectively.Next,the other 83 sets of data corresponding to the remaining CSR values are fitted using the same knot vector and number of control coefficients.Finally,the 84 univariate B-spline functions are adopted as inputs,and the CSR value is used as a variable to perform B-spline fitting on their control coefficients.The knot vector and the number of control vertices are again determined using the above methods.As a result,a surface sunshape model is obtained,expressed as a tensor-product B-spline function of CSR and θ with 12×15 control coefficients.Compared with existing models,the proposed B-spline function model is C2 continuous and has the advantages of CSR alignment,high fitting accuracy,and analytical integration of the radiative energy distribution.
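A rough sketch of the Cox-de Boor recursion that evaluates the univariate B-spline basis underlying such tensor-product models (the knot vector and coefficients below are arbitrary illustrations, not the fitted 12×15 model):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis of degree k."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
            * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def bspline_value(t, coeffs, degree, knots):
    """Evaluate a B-spline function from its control coefficients."""
    return sum(c * bspline_basis(i, degree, t, knots)
               for i, c in enumerate(coeffs))

# A clamped cubic B-spline with 5 coefficients; constant coefficients
# reproduce a constant function because the basis is a partition of unity.
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
```

The tensor-product surface model evaluates one such basis in θ and another in CSR and sums over the 12×15 grid of control coefficients.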
Local Progressive and Iterative Approximation for Least Squares B-spline Curve and Surface Fitting
GAO Yang, JIANG Yini, LIN Hongwei
Computer Science. 2024, 51 (1): 225-232.  doi:10.11896/jsjkx.230700152
Abstract PDF(2555KB) ( 1446 )   
References | Related Articles | Metrics
Progressive and iterative approximation for least squares B-spline curve and surface fitting(LSPIA),as an effective method for fitting large-scale data,has attracted the attention of many researchers.To address the problem that the LSPIA algorithm is less effective at fitting local data points,a local LSPIA algorithm,called LOCAL-LSPIA,is proposed.Firstly,an initial curve is given and some of the data points are selected from the given data points.Then,the control points to be adjusted are selected on the initial curve.Finally,LOCAL-LSPIA generates a series of locally varying fitted curves(surfaces) by iteratively adjusting only this subset of control points,ensuring that the limit of the generated curves(surfaces) is the least-squares fit of the selected data points under adjustment of only these control points.Experimental results on multiple curve and surface fitting tasks show that the LOCAL-LSPIA algorithm requires fewer steps and less time than the LSPIA algorithm to achieve the same local fitting accuracy.Therefore,LOCAL-LSPIA is effective and converges faster than the LSPIA algorithm when fitting local data.
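The progressive-iterative idea can be sketched in a few lines. The toy fit below uses a single-segment (Bernstein/Bézier) basis rather than a general B-spline, and illustrates the iteration scheme only, not the LOCAL-LSPIA algorithm itself:

```python
from math import comb

def bernstein(j, n, t):
    """Degree-n Bernstein basis (a single-segment B-spline basis)."""
    return comb(n, j) * t ** j * (1 - t) ** (n - j)

def lspia_fit(points, params, n_ctrl=4, mu=0.4, iters=500):
    """Progressive-iterative least-squares fit of a 2D curve to data points.

    Each iteration blends the fitting residuals back onto the control
    points; for a small enough step mu the limit curve is the
    least-squares fit of the data, which is the core idea of LSPIA.
    """
    n = n_ctrl - 1
    idx = [round(j * (len(points) - 1) / n) for j in range(n_ctrl)]
    ctrl = [list(points[i]) for i in idx]  # initial control points from data
    for _ in range(iters):
        deltas = [[0.0, 0.0] for _ in range(n_ctrl)]
        for (x, y), t in zip(points, params):
            cx = sum(bernstein(j, n, t) * ctrl[j][0] for j in range(n_ctrl))
            cy = sum(bernstein(j, n, t) * ctrl[j][1] for j in range(n_ctrl))
            for j in range(n_ctrl):
                w = bernstein(j, n, t)
                deltas[j][0] += w * (x - cx)
                deltas[j][1] += w * (y - cy)
        for j in range(n_ctrl):
            ctrl[j][0] += mu * deltas[j][0]
            ctrl[j][1] += mu * deltas[j][1]
    return ctrl

# Fit points sampled from y = x^2, which a cubic curve represents exactly.
data = [(t / 8, (t / 8) ** 2) for t in range(9)]
ctrl = lspia_fit(data, [t / 8 for t in range(9)])
```

LOCAL-LSPIA differs in that only a chosen subset of control points receives the `deltas` update, driven by a chosen subset of the data points.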
FeaEM:Feature Enhancement-based Method for Weakly Supervised Salient Object Detection via Multiple Pseudo Labels
SHI Dianxi, LIU Yangyang, SONG Linna, TAN Jiefu, ZHOU Chenlei, ZHANG Yi
Computer Science. 2024, 51 (1): 233-242.  doi:10.11896/jsjkx.230500035
Abstract PDF(4005KB) ( 1484 )   
References | Related Articles | Metrics
Salient object detection aims to detect the most conspicuous areas of an image.Traditional methods based on a single label are inevitably affected by the refinement algorithm and exhibit bias,which further degrades the detection performance of the saliency network.To solve this problem,based on a multi-instruction filter structure,this paper proposes a feature enhancement-based method for weakly supervised salient object detection via multiple pseudo labels(FeaEM),which integrates more comprehensive and accurate saliency cues from multiple labels to effectively improve detection performance.The core of the FeaEM method is to introduce a new multi-instruction filter structure and use multiple pseudo-labels to avoid the negative effects of a single label.By introducing a feature selection mechanism into the instruction filter,more accurate saliency cues are extracted and filtered from noisy pseudo-labels,so as to learn more effective representative features.Meanwhile,existing weakly supervised salient object detection methods are very sensitive to the scale of the input image,and predictions for different-sized inputs of the same image deviate considerably.A scale feature fusion mechanism is therefore introduced to ensure that the outputs for different sizes of the same image are consistent,thereby effectively improving the scale generalization ability of the model.Extensive experiments on multiple datasets show that the proposed FeaEM method is superior to the most representative methods.
Weakly Supervised Video Anomaly Detection Based on Dual Dynamic Memory Network
ZHOU Wenhao, HU Hongtao, CHEN Xu, ZHAO Chunhui
Computer Science. 2024, 51 (1): 243-251.  doi:10.11896/jsjkx.230300134
Abstract PDF(3019KB) ( 1486 )   
References | Related Articles | Metrics
Video anomaly detection aims to identify frame-level abnormal behaviors in video.Weakly supervised methods use both normal and abnormal videos supplemented by video-level labels for training,and show better performance than unsupervised methods.However,current weakly supervised video anomaly detection methods cannot record the long-term patterns of videos.Meanwhile,some methods use information from future frames to achieve better detection results,which makes online application impossible.For this reason,a weakly supervised video anomaly detection method based on a dual dynamic memory network is proposed for the first time in this paper.The memory network contains two memory modules designed to record the long-term normal and abnormal patterns of videos respectively.In order to realize the collaborative update of video features and memory items,a read operation is used to enhance the features of video frames based on the memory items,and a write operation is used to update the contents of memory items based on the features of video frames.Meanwhile,the number of memory items is dynamically adjusted during training to meet the needs of different video surveillance scenarios.In training,a modality separation loss is proposed to increase the discrimination between memory items.During testing,only memory items are needed,without the participation of future video frames,so that accurate online detection can be achieved.Experimental results on two public weakly supervised video anomaly detection datasets show that the proposed method is superior to all online methods,and is also highly competitive with offline methods.
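The read/write interplay described above can be sketched as follows; the similarity measure, update rule, and learning rate here are illustrative assumptions, not the paper's exact design:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def memory_read(query, memory):
    """Augment a frame feature with a similarity-weighted sum of memory items."""
    weights = softmax([dot(query, item) for item in memory])
    read = [sum(w * item[d] for w, item in zip(weights, memory))
            for d in range(len(query))]
    return [q + r for q, r in zip(query, read)]

def memory_write(query, memory, lr=0.5):
    """Move the best-matching memory item toward the current frame feature."""
    sims = [dot(query, item) for item in memory]
    j = sims.index(max(sims))
    memory[j] = [(1 - lr) * m + lr * q for m, q in zip(memory[j], query)]
    return memory

mem = [[1.0, 0.0], [0.0, 1.0]]           # two recorded patterns
enhanced = memory_read([0.8, 0.2], mem)  # read: feature enhancement
mem = memory_write([0.8, 0.2], mem)      # write: memory item update
```

In the paper's setting one such module records normal patterns and another records abnormal patterns, and the number of items grows or shrinks during training.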
Artificial Intelligence
Survey on Domain Limited Relation Extraction
HOU Jing, DENG Xiaomei, HAN Pengwu
Computer Science. 2024, 51 (1): 252-265.  doi:10.11896/jsjkx.230200100
Abstract PDF(2833KB) ( 1340 )   
References | Related Articles | Metrics
Domain-limited relation extraction aims to capture essential information from text under the premise of predefined entity types and relation types,mostly using triples composed of head and tail entities and relations as the structured information representation.As one of the important tasks of information extraction,it plays an important role in question answering and information retrieval.Based on its concepts and task paradigms,this paper systematically sorts out the technical methods of domain-limited relation extraction in the context of deep learning.According to whether entities are given in advance,the task is divided into relation classification and triplet extraction.According to the characteristics of the task,the former can be further divided into relation classification under supervised conditions,few-shot relation classification,and relation classification under distant supervision.This paper discusses and analyzes the commonly used technical methods and their advantages and disadvantages in the above tasks.Finally,we summarize the development potential and existing challenges of relation extraction technology in low-resource,multimodal and other situations that are closer to the real world.
Fairness Metrics of Machine Learning:Review of Status,Challenges and Future Directions
ZHANG Wenqiong, LI Yun
Computer Science. 2024, 51 (1): 266-272.  doi:10.11896/jsjkx.230500224
Abstract PDF(1998KB) ( 1284 )   
References | Related Articles | Metrics
With the increasing popularity of machine learning applications,the fairness of machine learning has attracted widespread attention from academia and industry,and has become an important component of trustworthy artificial intelligence.To evaluate and improve the fairness of machine learning applications,a series of fairness metrics have been proposed by researchers.These metrics help to ensure fair decision-making of machine learning models among different individuals and groups,and provide guidance for improving and optimizing the models.However,there is still no consensus on the differences and correlations between these metrics,and they are not clearly delineated across different scenarios and tasks;in other words,these fairness metrics lack a comprehensive classification system.In this paper,the fairness metrics are comprehensively organized and classified.Starting from their mathematical definitions,the metrics are divided into two categories according to whether they are based on probability statistics.The two types of metrics are then further subdivided and elaborated separately.To facilitate readers' understanding and application,a practical case is used to point out the advantages and challenges of various metrics in terms of application scenarios and implementation conditions;the relationships between metrics are also discussed in conjunction with mathematical concepts,and possible future research directions are prospected.
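Two widely used probability-based metrics, demographic parity difference and equal opportunity difference, can be computed as follows (standard definitions; the two-group split and the data are illustrative):

```python
def demographic_parity_diff(y_pred, groups):
    """Difference in positive-prediction rates between two groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    lo, hi = sorted(rates.values())
    return hi - lo

def equal_opportunity_diff(y_true, y_pred, groups):
    """Difference in true-positive rates between two groups (0 = parity)."""
    tprs = {}
    for g in set(groups):
        pos = [p for p, t, gg in zip(y_pred, y_true, groups)
               if gg == g and t == 1]
        tprs[g] = sum(pos) / len(pos)
    lo, hi = sorted(tprs.values())
    return hi - lo

# Binary predictions for two groups of four individuals each.
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
```

Note that the two metrics can disagree: a model can satisfy demographic parity while violating equal opportunity, which is one reason a classification system for metrics is needed.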
Survey on Generative Diffusion Model
YAN Zhihao, ZHOU Zhangbing, LI Xiaocui
Computer Science. 2024, 51 (1): 273-283.  doi:10.11896/jsjkx.230300057
Abstract PDF(2816KB) ( 1440 )   
References | Related Articles | Metrics
Diffusion models have shown high-quality sample generation ability in the field of generative models,and have constantly set new records on the FID score,a common image generation evaluation metric,since their introduction,becoming a research hotspot in this field.However,related reviews are scarce in China.Therefore,this paper summarizes and analyzes research on diffusion generative models.Firstly,it analyzes the derivative models of each basic diffusion model,which focus on optimizing internal algorithms and efficient sampling,by discussing the characteristics and principles of three common models:the denoising diffusion probabilistic model,the score-based diffusion generative model,and the diffusion generative model based on stochastic differential equations.Secondly,it summarizes current applications of diffusion models in computer vision,natural language processing,time series,multimodal,and interdisciplinary fields.Finally,based on the above discussion,suggestions are proposed for the existing limitations of diffusion generative models,such as long sampling times and many sampling steps,and research directions for their future development are provided based on previous studies.
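As a concrete anchor for the denoising diffusion probabilistic model mentioned above, its closed-form forward (noising) process can be sketched as follows; this is a textbook illustration with a linear variance schedule, not tied to any particular surveyed model:

```python
import math
import random

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products of (1 - beta_t) for a linear variance schedule."""
    alpha_bar, out = 1.0, []
    for t in range(T):
        beta = beta_start + (beta_end - beta_start) * t / (T - 1)
        alpha_bar *= 1.0 - beta
        out.append(alpha_bar)
    return out

def diffuse(x0, t, alpha_bars, rng=random):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I).

    The closed form lets training pick an arbitrary timestep t directly,
    without simulating all intermediate noising steps.
    """
    ab = alpha_bars[t]
    return [math.sqrt(ab) * x + math.sqrt(1 - ab) * rng.gauss(0, 1)
            for x in x0]

ab = make_alpha_bars()
```

The reverse (denoising) direction is what a trained network approximates, and the long sampling times criticized above come from iterating that reverse process over many steps.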
Automated Kaomoji Extraction Based on Large-scale Danmaku Texts
MAO Xin, LEI Zhanyao, QI Zhengwei
Computer Science. 2024, 51 (1): 284-294.  doi:10.11896/jsjkx.230400120
Abstract PDF(2136KB) ( 1311 )   
References | Related Articles | Metrics
As a new type of emoticon that emerged in the Internet age,kaomoji not only enjoy popularity among Internet users and mainstream social media but also have indispensable value in emotional expression,cultural promotion,and other aspects.Considering that kaomoji carry rich semantic and emotional information,studying them in the context of Internet texts can promote the analysis and understanding of such texts,thus improving the effectiveness of various natural language processing tasks.Detecting and extracting kaomoji from texts are the primary steps in analyzing texts containing them.However,due to the flexible structure,diverse types,and rapid evolution of kaomoji,most existing works lack a comprehensive analysis of kaomoji,resulting in limitations such as low accuracy,difficulty in determining boundaries,and poor timeliness.In this paper,through an in-depth analysis of kaomoji features,a kaomoji detection and extraction algorithm called Emoly,based on a large-scale danmaku text dataset,is proposed.It extracts preliminary candidate strings through preprocessing,combines various improved statistical indicators and filtering rules to select the final candidate strings,and ranks them based on text similarity to produce the final results.Experimental results show that the Emoly algorithm achieves a recall rate of 91% on a dataset of millions of danmaku texts,effectively and accurately detecting and extracting kaomoji from the texts,and demonstrates robustness,superiority,and generality.Additionally,the proposed algorithm provides new ideas and methods for tasks such as Chinese word segmentation,sentiment analysis,and input method dictionary updates,offering broad application value.
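One classic statistical indicator for judging candidate-string boundaries is branch (adjacency) entropy; the sketch below illustrates the idea and is not the Emoly algorithm's exact indicator set:

```python
import math
from collections import Counter

def branch_entropy(candidate, corpus, side="right"):
    """Entropy of the characters adjacent to a candidate string in a corpus.

    High entropy on both sides suggests the candidate is a free-standing
    unit (e.g. a complete kaomoji) rather than a fragment of a longer
    expression, whose neighbors would be nearly deterministic.
    """
    neighbors = Counter()
    start = corpus.find(candidate)
    while start != -1:
        idx = start + len(candidate) if side == "right" else start - 1
        if 0 <= idx < len(corpus):
            neighbors[corpus[idx]] += 1
        start = corpus.find(candidate, start + 1)
    total = sum(neighbors.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total)
                for c in neighbors.values())

corpus = "a(^_^)b c(^_^)d e(^_^)f"
```

Here the complete kaomoji `(^_^)` has varied right neighbors (high entropy), while the fragment `(^_` is always followed by `^` (zero entropy), so the boundary is placed after the full string.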
Construction and Compounding of a Class of Regular Standard Contradictions in Propositional Logic
ZANG Hui, HE Xingxing, WANG Chenglong, LI Yingfang, LI Tianrui
Computer Science. 2024, 51 (1): 295-300.  doi:10.11896/jsjkx.230600009
Abstract PDF(1504KB) ( 1391 )   
References | Related Articles | Metrics
The resolution principle is a concise,sound and complete inference rule in automated reasoning,and the deductive theory of standard contradiction separation is an extension of binary resolution.Since the structure of standard contradictions is very complex and there are few existing contradiction types and generation strategies,this paper first obtains multiple compound strategies for generating new contradictions by compounding two or more contradictions,based on the standard contradiction separation deduction theory in propositional logic.Then a special kind of standard contradiction structure,i.e.,the composite regular standard contradiction,is put forward to enrich the structural features of contradictions.Furthermore,the expandability of the different clauses of the new contradictions obtained by compounding is discussed,which leads to corresponding literal-adding strategies.Finally,algorithms for generating contradictions are proposed to provide a reference for further implementing the generation of new contradictions on computers.
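The binary resolution rule that standard contradiction separation extends can be stated compactly; in this illustrative sketch a clause is a set of (atom, polarity) literals:

```python
def resolve(c1, c2):
    """All binary resolvents of two propositional clauses.

    A clause is a frozenset of literals; a literal is (atom, polarity).
    Resolving on a complementary pair removes both literals and unions
    the remainders; deriving the empty clause proves unsatisfiability.
    """
    out = []
    for atom, pol in c1:
        if (atom, not pol) in c2:
            resolvent = (c1 - {(atom, pol)}) | (c2 - {(atom, not pol)})
            out.append(frozenset(resolvent))
    return out

# {p, q} and {~p, r} resolve on p to give {q, r}.
c1 = frozenset({("p", True), ("q", True)})
c2 = frozenset({("p", False), ("r", True)})
```

Contradiction separation generalizes this two-clause rule to deductions involving more than two clauses at once, which is why richer contradiction structures and generation strategies matter.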
Curriculum Learning Framework Based on Reinforcement Learning in Sparse Heterogeneous Multi-agent Environments
LUO Ruiqing, ZENG Kun, ZHANG Xinjing
Computer Science. 2024, 51 (1): 301-309.  doi:10.11896/jsjkx.230500146
Abstract PDF(3043KB) ( 1329 )   
References | Related Articles | Metrics
The battlefield of modern warfare is large and contains a variety of units,and the use of multi-agent reinforcement learning(MARL) in battlefield simulation can enhance the collaborative decision-making ability among combat units and thus improve combat effectiveness.Current applications of MARL in military simulation often rely on two simplifications:the homogeneity of agents and the dense distribution of combat units.However,real-world warfare scenarios may not adhere to these assumptions and may involve various heterogeneous agents and sparsely distributed combat units.In order to explore the potential applications of reinforcement learning in a wider range of scenarios,this paper proposes improvements in these two aspects.Firstly,a multi-scale multi-agent amphibious landing environment(M2ALE) is designed to address the two simplifications,incorporating various heterogeneous agents and scenarios with sparsely distributed combat units.These complex settings exacerbate the exploration difficulty and non-stationarity of multi-agent environments,making training with commonly used multi-agent algorithms difficult.Secondly,a heterogeneous multi-agent curriculum learning framework(HMACL) is proposed to address the challenges in the M2ALE environment.HMACL consists of three modules:a source task generating(STG) module,a class policy improving(CPI) module,and a Trainer module.The STG module generates source tasks to guide agent training,while the CPI module proposes a class-based parameter sharing strategy to mitigate the non-stationarity of the multi-agent system and implement parameter sharing in a heterogeneous agent system.The Trainer module trains the latest policy using any MARL algorithm with the source tasks generated by the STG module and the latest policy from the CPI module.HMACL can alleviate the exploration difficulty and non-stationarity issues of commonly used MARL algorithms in the M2ALE environment and guide the learning process of the multi-agent system.Experiments show that using HMACL significantly improves the sampling efficiency and final performance of MARL algorithms in the M2ALE environment.
Knowledge Graph Completion Algorithm Based on Generative Adversarial Network and Positive and Unlabeled Learning
HU Binhao, ZHANG Jianpeng, CHEN Hongchang
Computer Science. 2024, 51 (1): 310-315.  doi:10.11896/jsjkx.230300006
Abstract PDF(1692KB) ( 1356 )   
References | Related Articles | Metrics
With the widespread application of knowledge graphs,the majority of real-world knowledge graphs suffer from incompleteness,which hinders their practical application.As a result,knowledge graph completion has become a hot topic in the field.However,most existing methods focus on the design of scoring functions,with only a few studies paying attention to negative sampling strategies.Among knowledge graph completion algorithms that aim to improve negative sampling,methods based on generative adversarial networks(GANs) have achieved significant progress.Nonetheless,existing studies have not addressed the false-negative issue,i.e.,generated negative samples may contain actual facts.To address this issue,this paper proposes a knowledge graph completion algorithm based on GANs and positive-unlabeled learning.In the proposed method,GANs are utilized to generate unlabeled samples,while positive-unlabeled learning is employed to alleviate the false-negative problem.Extensive experiments on benchmark datasets demonstrate the effectiveness and accuracy of the proposed algorithm.
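The false-negative issue is easy to see in code. The sketch below uses a TransE-style score and tail corruption purely for illustration; the entities and facts are hypothetical and the paper's generator is a GAN, not random choice:

```python
import random

def transe_score(h, r, t):
    """TransE plausibility: negative L1 norm of h + r - t (higher = more plausible)."""
    return -sum(abs(hv + rv - tv) for hv, rv, tv in zip(h, r, t))

def corrupt_tail(triple, entities, known_facts, rng=random):
    """Replace the tail entity to build a candidate negative triple.

    The result is best treated as *unlabeled* rather than negative: it may
    coincide with a true but unobserved fact (a false negative), which is
    exactly what positive-unlabeled learning is meant to account for.
    """
    h, r, t = triple
    candidates = [e for e in entities if e != t]
    t2 = rng.choice(candidates)
    return (h, r, t2), ((h, r, t2) in known_facts)

facts = {("paris", "capital_of", "france")}
```

A real knowledge graph only records a fraction of true triples, so treating every corrupted triple as a hard negative injects label noise; PU learning instead weights unlabeled samples by how likely they are to be positive.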
Information Security
Survey of Vulnerability Benchmark Construction Technique
MA Zongshuai, WU Zehui, YAN Chenyu, WEI Qiang
Computer Science. 2024, 51 (1): 316-326.  doi:10.11896/jsjkx.230300209
Abstract PDF(2535KB) ( 1708 )   
References | Related Articles | Metrics
The development of software vulnerability analysis technology has led to the widespread use of various techniques and tools for discovering vulnerabilities.Nevertheless,assessing the capability boundaries of these techniques,methods,and tools remains a fundamental problem in this field.A vulnerability benchmark for capability assessment plays a pivotal role in solving this problem.The purpose of this paper is to review representative results related to the construction of benchmark test sets over the past 20 years.Firstly,it explains the developmental history of vulnerability benchmarks from an automation perspective.Then,it classifies the techniques for constructing vulnerability benchmarks and provides a general process model,explaining the ideas,processes,and limitations of different construction methods.Lastly,the limitations of current research are summarized and future research is prospected.
Cryptocurrency Mining Malware Detection Method Based on Sample Embedding
FU Jianming, JIANG Yuqian, HE Jia, ZHENG Rui, SURI Guga, PENG Guojun
Computer Science. 2024, 51 (1): 327-334.  doi:10.11896/jsjkx.230100116
Abstract PDF(2203KB) ( 1721 )   
References | Related Articles | Metrics
Due to its high profitability and anonymity,cryptocurrency mining malware poses a great threat and causes great losses to computer users.To confront this threat,machine learning detectors based on static software features usually select a single type of static feature,or integrate the detection results of different kinds of static features through ensemble learning,ignoring the internal relationships between different kinds of static features,so their detection rates leave room for improvement.This paper starts from the internal hierarchical structure of mining malware.It extracts basic blocks,control flow graphs and function call graphs of samples as static features,trains a three-layer model to embed these features into vectors respectively,gradually aggregates the features from the bottom to the top,and finally feeds the top-level features to a classifier to detect mining malware.To simulate detection in the real world,the model is first trained on a relatively small experimental dataset,and its performance is then tested on another much larger dataset.Experimental results show that the performance of the proposed method is much better than that of several machine learning models proposed in recent years:the recall rate and accuracy rate of the three-layer embedding model are more than 7% and 3% higher than those of other models,respectively.
Defense Method Against Backdoor Attack in Federated Learning for Industrial Scenarios
WANG Xun, XU Fangmin, ZHAO Chenglin, LIU Hongfu
Computer Science. 2024, 51 (1): 335-344.  doi:10.11896/jsjkx.230500024
Abstract PDF(3649KB) ( 1686 )   
References | Related Articles | Metrics
As a machine learning method that can solve the isolated data island problem and share data resources,federated learning has characteristics consistent with the requirements for the intelligent development of industrial equipment,and has therefore been applied in many industries.However,attack methods against the federated learning architecture are constantly being updated.Backdoor attack,as one representative attack method,is both stealthy and destructive,while traditional defense schemes often fail to work under the federated learning framework or are insufficient to prevent early backdoor attacks.Therefore,it is of great significance to research backdoor defense schemes applicable to the federated learning architecture.A backdoor diagnosis scheme for the federated learning architecture is proposed,which reconstructs the backdoor trigger by using the characteristics of the backdoor model without any data.This scheme can accurately identify and remove backdoor models,achieving the goal of global model backdoor defense.In addition,a new detection mechanism is proposed to realize backdoor detection for early models.On this basis,the model judgment algorithm is optimized,and both accuracy and speed are improved through an early-exiting joint judgment mode.
Lightweight Differential Privacy Federated Learning Based on Gradient Dropout
WANG Zhousheng, YANG Geng, DAI Hua
Computer Science. 2024, 51 (1): 345-354.  doi:10.11896/jsjkx.230400123
Abstract PDF(4882KB) ( 1671 )   
References | Related Articles | Metrics
To address the privacy issues of traditional machine learning,federated learning,as the first collaborative online learning solution that does not require users to upload real data but only model updates,has received widespread attention and research.However,it requires users to train locally and upload model updates that may still contain sensitive information,which raises new privacy concerns.At the same time,since the complete training must be performed locally by the user,the computational and communication overheads become particularly critical,so there is also an urgent need for a lightweight federated learning architecture.In this paper,a federated learning framework with a differential privacy mechanism is used to meet further privacy requirements.In addition,a Fisher information matrix-based dropout mechanism,FisherDropout,is proposed for the first time for the optimal selection of each dimension in the gradients updated by the client side.This mechanism greatly saves computing cost,communication cost,and privacy budget,and establishes a federated learning framework with both privacy and lightweight advantages.Extensive experiments on real-world datasets demonstrate the effectiveness of the scheme.Experimental results show that,compared with other federated learning frameworks,the FisherDropout mechanism can save 76.8%~83.6% of communication overhead and 23.0%~26.2% of computational overhead in the best case,and also has outstanding advantages in balancing privacy and usability in differential privacy.
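A hedged sketch of the idea behind such a mechanism: approximate each gradient coordinate's Fisher information by its squared gradient, keep only the most informative coordinates, then clip and noise the survivors before upload. The selection rule, clipping, and noise scale below are assumptions for illustration, not the paper's exact FisherDropout design:

```python
import math
import random

def fisher_dropout(grad, keep_ratio=0.25, clip=1.0, sigma=0.5, rng=random):
    """Drop low-information gradient coordinates, then clip and noise the rest.

    The empirical Fisher information of each coordinate is approximated by
    its squared gradient; dropped coordinates cost no bandwidth and consume
    no privacy budget, which is the source of the lightweight advantage.
    """
    k = max(1, int(len(grad) * keep_ratio))
    order = sorted(range(len(grad)), key=lambda i: grad[i] ** 2, reverse=True)
    kept = set(order[:k])
    norm = math.sqrt(sum(grad[i] ** 2 for i in kept))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0  # L2 clipping
    return [
        grad[i] * scale + rng.gauss(0, sigma * clip) if i in kept else 0.0
        for i in range(len(grad))
    ]

sparse = fisher_dropout([0.1, -2.0, 0.05, 1.0], keep_ratio=0.5,
                        rng=random.Random(0))
```

Only the non-zero coordinates need to be transmitted, so a 25% keep ratio cuts upload size roughly fourfold while the Gaussian noise provides the differential privacy guarantee.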
Black-box Graph Adversarial Attacks Based on Topology and Feature Fusion
GUO Yuxing, YAO Kaixuan, WANG Zhiqiang, WEN Liangliang, LIANG Jiye
Computer Science. 2024, 51 (1): 355-362.  doi:10.11896/jsjkx.230600127
Abstract PDF(2793KB) ( 199 )   
References | Related Articles | Metrics
In the era of big data,close relationships between data are widespread,and graph data analysis and mining have become an important development trend of big data technology.In recent years,as a novel type of graph representation learning tool,graph neural networks(GNNs) have attracted extensive attention in academia and industry,and have achieved great success in various real-world applications.Lately,many researchers have come to believe that the security and confidence level of artificial intelligence is a vital issue,and much work now focuses on deep learning adversarial attacks on Euclidean-structured data such as images.This paper focuses on the black-box adversarial attack problem on graph data,a typical non-Euclidean structure.When the information(structure and parameters) of the graph neural network model is unknown,imperceptible non-random perturbations of the graph data are carried out to attack the model and degrade its performance:applying an imperceptible non-random perturbation to the graph structure or node attributes can easily fool GNNs.Node-selection-based black-box adversarial attack methods are important,but similar methods only take account of the topology information of nodes without fully considering the information of node features.Therefore,this paper proposes a black-box adversarial attack on graph neural networks via topology and feature fusion for citation networks.In the process of selecting important nodes,the method fuses the feature information and topology information of graph nodes,so that the selected nodes are significant to the graph data in both features and topology.Attackers apply small perturbations to the attributes of the selected nodes,and this attack has a great impact on the model.Moreover,experiments on three classic datasets show that the proposed attack strategy can remarkably reduce the performance of the model without access to model parameters and is better than the baseline methods.
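The fusion of topology and feature information for node selection can be sketched as a simple weighted score; node degree and feature magnitude below stand in for the paper's importance measures, which the abstract does not specify:

```python
def select_target_nodes(adj, features, k=2, alpha=0.5):
    """Rank nodes by a fused importance score and pick the top k.

    The score combines normalized degree (topology) with normalized
    feature magnitude, so selected nodes are significant in both views;
    the attacker then perturbs only those nodes' attributes.
    """
    n = len(adj)
    deg = [sum(row) for row in adj]                       # topology view
    fmag = [sum(abs(x) for x in feat) for feat in features]  # feature view
    max_d, max_f = max(deg) or 1, max(fmag) or 1
    score = [alpha * d / max_d + (1 - alpha) * f / max_f
             for d, f in zip(deg, fmag)]
    return sorted(range(n), key=lambda i: score[i], reverse=True)[:k]

# Node 0 is a topological hub; node 2 carries the largest features.
adj = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
feats = [[0.1], [0.2], [5.0], [0.1]]
top = select_target_nodes(adj, feats, k=2)
```

A topology-only selector would pick only the hub; the fused score also surfaces the feature-rich node, which is the gap the proposed method targets.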
Two-factor Authentication Scheme for Blind Cloud Storage System Based on Password and Smart Card
WANG Yi, HU Xuexian, WEI Jianghong
Computer Science. 2024, 51 (1): 363-370.  doi:10.11896/jsjkx.230700090
Abstract PDF(2132KB) ( 2991 )   
References | Related Articles | Metrics
Aiming at the demand for large-scale data storage,how to securely realize remote access to user data using cloud storage technologies while retaining data portability and security is a current research hotspot.At USENIX Security 2022,Chen et al.proposed an efficient and portable blind cloud storage scheme for the case where users hold only one low-entropy password.However,the scheme inevitably inherits the weakness that passwords cannot resist online dictionary attacks.To compensate for this security shortage of password-only authentication,this paper designs a two-factor authentication scheme for a blind cloud storage system based on a password and a smart card.Experimental results show that the proposed scheme not only realizes portability,deployability and blind cloud storage,but also achieves a higher level of security than password-only authentication schemes with comparable computation and communication efficiency.