


-
Privacy Policy Compliance Detection Method for Mobile Application Based on Large Language Model
王立梅, 韩林睿, 杜祖炜, 郑日, 时建中, 刘奕群. 基于大语言模型的移动应用隐私政策合规性检测方法[J]. 计算机科学, 2025, 52(8): 1-16.
WANG Limei, HAN Linrui, DU Zuwei, ZHENG Ri, SHI Jianzhong, LIU Yiqun. Privacy Policy Compliance Detection Method for Mobile Application Based on Large Language Model[J]. Computer Science, 2025, 52(8): 1-16. doi:10.11896/jsjkx.250300156
Abstract
Privacy policies serve as self-regulatory commitments by online service providers to legitimize the collection and utilization of personal information, aiming to enhance user trust and provide users with greater control over data processing. However, they face practical challenges including excessive length, technical jargon proliferation, and ambiguities in legal compliance. Traditional approaches rely on classification models that detect compliance through annotated policy texts. However, these methods suffer from oversimplified evaluation metrics, high annotation costs, and limited detection accuracy. This paper proposes a large language model (LLM)-based framework for mobile App privacy policy compliance detection, structured around three pillars: (1) establishing a multi-tier compliance evaluation system, (2) designing a hierarchical reasoning framework enhanced by Dynamic Optimal Trajectory Search (DOTS), and (3) implementing automated compliance verification. Firstly, this paper constructs a compliance evaluation system comprising 6 first-level, 14 second-level, and 41 third-level indicators, grounded in nine legal frameworks including China's Civil Code and Personal Information Protection Law. Secondly, it develops the Dynamic Tri-Stage Hierarchical Compliance Evaluator (DOTS-THCE), a three-phase reasoning framework that enables few-shot prompting to guide LLMs in conducting multi-level dynamic assessments of privacy policies. Finally, it implements automated detection on the PPC-Bench dataset containing 4 821 privacy policies across 10 application categories collected from Tencent's "MyApp" store. Experimental results demonstrate that the Qwen2.5-7B-Instruct model augmented with DOTS-THCE outperforms baseline models (Deepseek-LLM-7B-Chat, Llama3.1-8B-Chinese-Chat, and GLM-4-9B-Chat) by a significant margin. The Qwen2.5-7B-Instruct@DOTS-THCE configuration achieves a macro-F1 score of 89.30%, surpassing traditional models including SVM, CNN, RNN, BERT, and Qwen2.5-7B-Instruct@RAG in terms of detection efficacy. This study not only pioneers LLM applications in privacy policy compliance detection, but also provides methodological insights for addressing data annotation scarcity in judicial AI systems.
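To make the multi-level, few-shot-prompted assessment concrete, the following minimal sketch walks a small three-level indicator tree and asks an LLM to judge each third-level indicator with a few-shot prompt. The indicator names, example judgments, and the query_llm() stub are hypothetical placeholders, not the paper's DOTS-THCE implementation or indicator system.

```python
# Minimal sketch of multi-level, prompt-based compliance checking.
# Indicator names, few-shot examples, and query_llm() are illustrative stand-ins.

FEW_SHOT_EXAMPLES = """\
Policy excerpt: "We collect your device identifiers to improve our service."
Indicator: States the purpose of collecting personal information.
Judgment: compliant

Policy excerpt: "We may share your data with partners."
Indicator: Names the third parties that receive personal information.
Judgment: non-compliant
"""

INDICATOR_TREE = {                      # illustrative fragment of a 3-level indicator system
    "Collection and use": {
        "Purpose disclosure": [
            "States the purpose of collecting personal information",
            "Lists the categories of personal information collected",
        ],
    },
}

def build_prompt(policy_text: str, indicator: str) -> str:
    """Compose a few-shot prompt asking the LLM to judge one third-level indicator."""
    return (f"{FEW_SHOT_EXAMPLES}\n"
            f"Policy excerpt: \"{policy_text}\"\n"
            f"Indicator: {indicator}.\n"
            f"Judgment:")

def query_llm(prompt: str) -> str:
    """Stub for an LLM call (e.g. a local Qwen2.5-7B-Instruct endpoint); returns a label."""
    raise NotImplementedError("plug in your own model client here")

def assess(policy_text: str) -> dict:
    """Walk the indicator hierarchy and collect one judgment per third-level indicator."""
    results = {}
    for level1, level2_map in INDICATOR_TREE.items():
        for level2, leaves in level2_map.items():
            for leaf in leaves:
                label = query_llm(build_prompt(policy_text, leaf))
                results[(level1, level2, leaf)] = label.strip().lower()
    return results
```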
-
Theoretical Modeling and Dynamic Analysis of Institutional Construction in Data Markets
商希雪, 韩海庭, 朱郑州, 屈秀伟. 数据市场制度建设的理论建模和动态分析[J]. 计算机科学, 2025, 52(8): 17-28.
SHANG Xixue, HAN Haiting, ZHU Zhengzhou, QU Xiuwei. Theoretical Modeling and Dynamic Analysis of Institutional Construction in Data Markets[J]. Computer Science, 2025, 52(8): 17-28. doi:10.11896/jsjkx.250400023
Abstract
In the new era marked by the emergence and rapid development of technologies such as artificial intelligence, data has become a core asset for enterprises and society. However, data market governance continues to face challenges such as insufficient economic incentives, difficulties in scientific quantification and evaluation, and prevalent covert infringements. Based on evolutionary game theory, this study constructs a tripartite game framework encompassing data providers, demanders, and regulatory platforms. By analyzing the dynamic impact of factors including enterprise data development capabilities, public regulatory intensity, and participants' strategic choices on the evolution of data markets, it finds that enhancing enterprise data development capabilities is fundamental to activating market vitality and improving social welfare, yet it is also one of the catalysts for corporate violations; increasing public regulatory intensity can standardize market order but may simultaneously suppress innovative practices among certain enterprises. Through theoretical solutions and numerical simulations, the study not only reveals the nonlinear characteristics of factors such as regulatory efficacy and development capabilities, but also provides a critical basis for achieving scientifically quantifiable law enforcement. By implementing a dynamic regulatory mechanism and analytical model featuring an "incentive-constraint-compensation" trinity approach, market evolution patterns can be effectively predicted. Aligning with short-, medium-, and long-term market development goals, adjusting parameter settings within this "incentive-constraint-compensation" framework will enhance the scientific rigor of policy formulation and the precision of policy intensity.
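As a rough illustration of how a tripartite evolutionary game is typically simulated, the sketch below integrates standard replicator dynamics for three populations (providers, demanders, regulator). The payoff-gap functions and coefficients are hypothetical placeholders for readability; they are not the payoff structure used in the paper.

```python
import numpy as np

# Replicator-dynamics sketch for a three-player evolutionary game.
# x, y, z are the probabilities that providers, demanders, and the regulator
# play their "cooperative" strategy; payoff gaps below are illustrative only.

def payoff_gaps(x, y, z):
    """Return, for each population, (payoff of strategy 1) - (payoff of strategy 2)."""
    dp = 2.0 * y - 1.0 * (1 - z)        # providers gain from honest demanders, lose without oversight
    dq = 1.5 * x - 0.8                  # demanders gain when providers supply quality data
    dr = 1.2 * (1 - x) * (1 - y) - 0.5  # regulator benefits from enforcement when violations are common
    return dp, dq, dr

def simulate(x0=0.3, y0=0.4, z0=0.5, steps=5000, dt=0.01):
    x, y, z = x0, y0, z0
    trajectory = [(x, y, z)]
    for _ in range(steps):
        dp, dq, dr = payoff_gaps(x, y, z)
        x += dt * x * (1 - x) * dp      # replicator equation for each population
        y += dt * y * (1 - y) * dq
        z += dt * z * (1 - z) * dr
        trajectory.append((x, y, z))
    return np.array(trajectory)

if __name__ == "__main__":
    traj = simulate()
    print("final strategy shares:", traj[-1])
```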
-
Review of Research on Deep Learning Compiler
刘正煜, 张帆, 祁晓峰, 高彦钊, 宋怡景, 范旺. 深度学习编译器研究综述[J]. 计算机科学, 2025, 52(8): 29-44.
LIU Zhengyu, ZHANG Fan, QI Xiaofeng, GAO Yanzhao, SONG Yijing, FAN Wang. Review of Research on Deep Learning Compiler[J]. Computer Science, 2025, 52(8): 29-44. doi:10.11896/jsjkx.250100062
Abstract
With the rapid development of artificial intelligence, an increasing number of neural network models and algorithms have been proposed. Meanwhile, as Moore's Law gradually loses its effectiveness, a variety of new accelerators and computer architectures have emerged, creating an urgent demand for the efficient deployment of neural network models on these novel hardware platforms. Against this backdrop, deep learning compilers have emerged. Unlike traditional compilers, deep learning compilers take various network models as input, use a multi-level intermediate representation design to optimize models layer by layer, and perform hardware-specific optimizations in the backend for different hardware architectures, ultimately generating optimized executable programs. This paper first introduces the general framework of deep learning compilers, including their core components and overall process. It then categorizes and discusses the various optimization techniques used in compilers, summarizing recent research progress and highlighting the key research trends in the field. Finally, it surveys the current stage of deep learning compiler research and looks forward to future research directions.
-
Data-driven Analysis of Evolutionary Trends and Collaboration Patterns in Open Source Academic Achievements
叶波甸, 高敏, 王伟, 陈阳. 数据驱动的开源学术成果演化规律与合作模式分析[J]. 计算机科学, 2025, 52(8): 45-50.
YE Bodian, GAO Min, WANG Wei, CHEN Yang. Data-driven Analysis of Evolutionary Trends and Collaboration Patterns in Open Source Academic Achievements[J]. Computer Science, 2025, 52(8): 45-50. doi:10.11896/jsjkx.250200013
Abstract
Open source has become a significant trend in software development, driving technological innovation and progress. Insights into current trends and collaboration models can help researchers and policymakers set reasonable goals. This paper analyzes 5 990 papers related to open source from the DBLP database, published between 1998 and 2023, to explore the evolution of open-source related studies. The analysis of publication venues, titles, and citation counts reveals two main categories of research: those focused on open-source software and those on empirical studies, with the former being more prevalent. Additionally, the collaborative relationships among researchers and countries are modeled, and the findings indicate that most researchers are affiliated with universities, primarily focusing on software engineering and open source. Furthermore, collaborations tend to be concentrated within single countries, predominantly involving developed nations.
-
Authorship Gender Recognition of Source Code Based on Multiple Mixed Features
刘泓玏, 陈娟, 付才, 韩兰胜, 郭晓威, 江帅. 基于多元混合特征的源代码作者性别属性识别[J]. 计算机科学, 2025, 52(8): 51-61.
LIU Hongle, CHEN Juan, FU Cai, HAN Lansheng, GUO Xiaowei, JIANG Shuai. Authorship Gender Recognition of Source Code Based on Multiple Mixed Features[J]. Computer Science, 2025, 52(8): 51-61. doi:10.11896/jsjkx.241000073
Abstract
With the development of the Internet, network security has attracted increasing attention, and cracking down on malicious code authors is an important part of it. At present, author identification based on malicious code writing style has achieved remarkable results. However, to understand an author's real-world information, it is necessary to analyze their social attributes and build a complete profile. Gender, as a key classification index of human social attributes, is an important part of an individual's real information. Other social attributes are largely associated with gender characteristics, so distinguishing gender is a necessary prerequisite for further exploring other social attributes. This study conducts an in-depth analysis of programmers' source code writing styles and summarizes 22 features associated with the gender of source code authors. Based on these gender recognition features, the adaptive boosting algorithm (AdaBoost) is used to train a gender recognition classifier for source code authors, ensuring a high recognition rate while improving model robustness. At the same time, it is compared with natural language gender recognition algorithms to highlight the applicability of the gender recognition features of source code authors. This study collects a total of 115 004 Java and 22 700 C++ source code files with gender labels from GitHub, providing the academic community with the first research dataset of source code authors with gender labels. The proposed method shows good performance on the collected C++ and Java datasets, reaching 98% and 94% accuracy respectively. The conclusions of this study explore the mapping from source code author style to other social attributes, which helps to guide further research in this direction.
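The basic training pipeline, handcrafted style features fed to an AdaBoost classifier, can be sketched as below. The feature extractor uses a few illustrative stylometric counts rather than the 22 features defined in the paper, and the dataset variables are placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy sketch: AdaBoost on handcrafted source-code style features.
# The features below are illustrative, not the paper's 22-feature set.

def extract_features(source_code: str) -> list:
    lines = source_code.splitlines() or [""]
    return [
        len(lines),                                          # program length
        sum(len(l) for l in lines) / len(lines),             # mean line length
        source_code.count("//") + source_code.count("/*"),   # comment markers
        sum(l.startswith(("\t", " ")) for l in lines),       # indented lines
        source_code.count("_"),                              # snake_case tendency
    ]

def train(code_samples, gender_labels):
    X = np.array([extract_features(c) for c in code_samples])
    y = np.array(gender_labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```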
-
OpenRank Dynamics: Influence Evaluation and Dynamic Propagation Models for Open Source Ecosystems
赵生宇, 彭佳恒, 王伟, 黄帆. OpenRank动力学:面向开源生态的影响力评估与动态传播模型[J]. 计算机科学, 2025, 52(8): 62-70.
ZHAO Shengyu, PENG Jiaheng, WANG Wei, HUANG Fan. OpenRank Dynamics: Influence Evaluation and Dynamic Propagation Models for Open Source Ecosystems[J]. Computer Science, 2025, 52(8): 62-70. doi:10.11896/jsjkx.250300005
Abstract
With the rapid development of the open source ecosystem, influence evaluation has become a critical tool for assessing developer contributions and project value. In open source communities, complex heterogeneous network structures make it difficult for traditional static evaluation methods to comprehensively capture influence propagation among nodes. To address this issue, this paper proposes an OpenRank dynamics method that integrates static evaluation with dynamic propagation models to provide a multidimensional and dynamic assessment of node influence within open source communities. Firstly, the OpenRank algorithm is implemented using matrix algebra and a graph iteration method based on the Pregel framework, enabling efficient computation on both small- and large-scale networks and ensuring its scalability and adaptability. Secondly, by incorporating classic propagation models such as the Independent Cascade (IC) model, the Linear Threshold (LT) model, and the Susceptible-Infected-Recovered (SIR) model, this study analyzes influence propagation patterns, speed, and reach, addressing the limitations of traditional static evaluation methods. Experimental results demonstrate that the dynamic OpenRank method significantly outperforms traditional approaches in terms of influence propagation efficiency and reach. Additionally, it exhibits strong engineering adaptability and scalability.
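For reference, the Independent Cascade model mentioned above can be simulated with a few lines of graph code. The graph, seed choice, and uniform activation probability below are illustrative assumptions, not the paper's OpenRank-weighted propagation setup.

```python
import random
import networkx as nx

# Minimal Independent Cascade (IC) simulation to estimate a node's reach.

def independent_cascade(graph: nx.DiGraph, seeds, p: float = 0.1, rng=random.Random(0)):
    """Return the set of nodes activated by the seed set under the IC model."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.successors(u):
                if v not in active and rng.random() < p:  # each edge fires once
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

if __name__ == "__main__":
    g = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)
    reach = [len(independent_cascade(g, [s])) for s in range(10)]
    print("estimated reach of first 10 nodes:", reach)
```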
-
Review on Application of Spatial-Temporal Graph Neural Network in PM2.5 Concentration Forecasting
唐博源, 李琦. 时空图神经网络在PM2.5浓度预测中的应用综述[J]. 计算机科学, 2025, 52(8): 71-85.
TANG Boyuan, LI Qi. Review on Application of Spatial-Temporal Graph Neural Network in PM2.5 Concentration Forecasting[J]. Computer Science, 2025, 52(8): 71-85. doi:10.11896/jsjkx.240700153
Abstract
Atmospheric fine particulate matter (PM2.5) has garnered significant public attention due to its detrimental effects on health and the environment. Achieving high-precision spatial-temporal forecasting of PM2.5 concentrations is crucial for guiding residents in effectively guarding against health risks and assisting environmental regulatory departments in formulating scientific environmental protection strategies. This study aims to explore effective methods for improving the accuracy of PM2.5 concentration predictions, especially the application potential and challenges of spatial-temporal graph neural network technology in this field. It begins by reviewing the development history of PM2.5 concentration forecasting methods, then delves into the integration of spatial-temporal graph neural networks with air quality monitoring networks, including strategies for constructing the graph. It further systematically outlines the spatial-temporal graph neural network models applied to PM2.5 concentration prediction and analyzes the main factors to consider in forecasting tasks and the design of spatial-temporal modules. Finally, from multiple dimensions such as multi-source data fusion, dynamic graph modeling, data sparsity, and the absence of model evaluation standards, it comprehensively discusses the challenges faced by PM2.5 concentration prediction based on spatial-temporal graph neural networks and proposes possible directions for development.
-
Survey on Data Processing and Data Augmentation in Low-resource Language Automatic Speech Recognition
杨健, 孙浏, 张丽芳. 低资源语言自动语音识别中的数据处理与数据增强综述[J]. 计算机科学, 2025, 52(8): 86-99.
YANG Jian, SUN Liu, ZHANG Lifang. Survey on Data Processing and Data Augmentation in Low-resource Language Automatic Speech Recognition[J]. Computer Science, 2025, 52(8): 86-99. doi:10.11896/jsjkx.240900009
Abstract
Due to the lack of transcribed speech, applying end-to-end ASR technology to low-resource languages is challenging, making low-resource language ASR a prominent research topic in NLP. Research on ASR in low-resource settings can be approached from two main aspects: data augmentation and model improvement. This paper focuses on the processing of training data in low-resource language ASR and summarizes the important research results in this field in recent years from the perspectives of data augmentation, sample processing, and feature engineering. Different types of data augmentation schemes are analyzed, and the utilization of unpaired speech and unpaired text is elaborated in detail. The feature engineering of ASR in low-resource scenarios is analyzed and summarized from different aspects such as feature extraction, embedding, and fusion. Finally, additional issues such as the construction of low-resource speech corpora are elaborated, and important directions for further research in low-resource language ASR are discussed.
-
Deep Graph Contrastive Clustering Algorithm Based on Dynamic Threshold Pseudo-label Selection
王沛, 杨希洪, 管仁祥, 祝恩. 基于动态阈值伪标签筛选的深度图对比聚类算法[J]. 计算机科学, 2025, 52(8): 100-108.
WANG Pei, YANG Xihong, GUAN Renxiang, ZHU En. Deep Graph Contrastive Clustering Algorithm Based on Dynamic Threshold Pseudo-label Selection[J]. Computer Science, 2025, 52(8): 100-108. doi:10.11896/jsjkx.240700112
Abstract
In recent years, graph neural networks have performed well in processing complex structural data and are widely used in node classification, graph classification, link prediction, and other fields. Deep graph clustering combines the powerful representation ability of GNNs with the goal of clustering algorithms to discover hidden community structures in complex graph-structured data. Existing pseudo-label-based graph clustering algorithms often use fixed thresholds to filter samples by category and obtain high-confidence samples to guide model optimization. However, fixed thresholds can lead to category imbalance, which in turn degrades clustering performance. To solve the above problems, this paper proposes a deep graph contrastive clustering algorithm based on dynamic threshold pseudo-label selection. Specifically, two multilayer perceptron (MLP) structures that do not share parameters are used to capture the latent structural features of the graph data, and the K-Means algorithm is used to obtain the clustering results. On this basis, a confidence strength is introduced to dynamically adjust the threshold for obtaining pseudo-labels, and the number of high-confidence samples in each category is dynamically adjusted during training to alleviate the category imbalance problem. In addition, this paper optimizes the contrastive learning strategy and improves the construction of sample pairs, enhancing the discriminative ability of the model. Experimental results show that the proposed method performs well on six benchmark datasets, surpassing existing methods on multiple evaluation metrics and demonstrating the effectiveness of the proposed algorithm.
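The core idea of per-class dynamic thresholding for pseudo-label selection can be sketched as follows. The scaling rule that lowers the threshold for low-confidence classes is an illustrative choice, not the confidence-strength schedule defined in the paper.

```python
import numpy as np

# Sketch of per-class dynamic thresholding for pseudo-label selection.
# probs is an (N, K) matrix of soft cluster assignments.

def select_pseudo_labels(probs: np.ndarray, base_threshold: float = 0.95):
    preds = probs.argmax(axis=1)                 # hard pseudo-label per sample
    confidence = probs.max(axis=1)               # confidence of that label
    selected = np.zeros(len(probs), dtype=bool)
    for k in range(probs.shape[1]):
        mask = preds == k
        if not mask.any():
            continue
        # lower the threshold for classes the model is currently unsure about,
        # so every class keeps contributing high-confidence samples
        class_strength = confidence[mask].mean()
        threshold_k = base_threshold * class_strength
        selected |= mask & (confidence >= threshold_k)
    return preds, selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(1000, 6))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    preds, keep = select_pseudo_labels(probs)
    print("kept per class:", np.bincount(preds[keep], minlength=6))
```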
-
Query Optimization Algorithm Based on Learning to Rank
余阳, 彭煜玮. 基于学习排序的查询优化算法[J]. 计算机科学, 2025, 52(8): 109-117.
YU Yang, PENG Yuwei. Query Optimization Algorithm Based on Learning to Rank[J]. Computer Science, 2025, 52(8): 109-117. doi:10.11896/jsjkx.250100151
Abstract
Query optimization is a key aspect of relational databases. In the traditional query optimization process, cardinality estimation of the join and filter operations in a query is usually required in order to obtain a better execution plan. However, due to the inaccuracy of cardinality estimation, the results of query optimization are often unsatisfactory. Some studies have attempted to improve cardinality estimation through machine learning-based methods and have made progress. This paper finds that although these methods perform well in dealing with numerical filtering predicates in queries, they are ineffective for other, more complex filtering predicates. To address this problem, this paper proposes a query optimization algorithm based on learning to rank. The algorithm intelligently evaluates and ranks multiple execution plans for a single query to select the best plan for execution. It iteratively mines better execution plans and cooperates with machine learning methods to finally select the optimal plan. Experimental results show that the proposed algorithm outperforms current learning-based query optimization algorithms on regular datasets, and the advantage is more significant on complex datasets.
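Ranking candidate execution plans rather than predicting their costs directly is a standard pairwise learning-to-rank setup, sketched below. The plan feature encoding, network shape, and training pairs are placeholders; the paper's actual plan representation and training pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

# Pairwise learning-to-rank sketch for scoring candidate execution plans.

class PlanScorer(nn.Module):
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                 # x: (batch, n_features)
        return self.net(x).squeeze(-1)    # higher score = preferred plan

def train_step(model, opt, better, worse):
    """better/worse: feature batches where the `better` plans ran faster."""
    loss_fn = nn.MarginRankingLoss(margin=0.1)
    target = torch.ones(better.size(0))   # ask the model to score `better` higher
    loss = loss_fn(model(better), model(worse), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = PlanScorer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):                  # toy training loop on random pairs
        better = torch.randn(16, 32) + 0.5
        worse = torch.randn(16, 32)
        train_step(model, opt, better, worse)
    candidates = torch.randn(5, 32)
    print("chosen plan index:", int(model(candidates).argmax()))
```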
-
Continuously Evolving Streaming Graph Neural Network
郭虎升, 张旭飞, 孙玉杰, 王文剑. 随时间持续演化的流图神经网络[J]. 计算机科学, 2025, 52(8): 118-126.
GUO Husheng, ZHANG Xufei, SUN Yujie, WANG Wenjian. Continuously Evolving Streaming Graph Neural Network[J]. Computer Science, 2025, 52(8): 118-126. doi:10.11896/jsjkx.241000186
Abstract
Streaming graphs are widely used in practical applications, and their node and structural characteristics change dynamically over time. Although graph neural networks (GNNs) are excellent at static graph node classification, they are difficult to apply directly to streaming graphs, because the continuous evolution of a streaming graph leads to information lag and omission, making it difficult for models to accurately extract streaming graph features. To address these challenges, the Continuously Evolving Streaming Graph Neural Network (CESGNN) is proposed for the node classification problem on streaming graphs. Firstly, the Continuous Updates Graph Convolutional Network (CU-GCN) incrementally updates parameters to adapt to changes in the node characteristics of the streaming graph, alleviating the information lag problem. Then, the Adaptive Deepening Graph Neural Network (AD-GNN) alleviates the information omission problem by decoupling the aggregation and update operations to mine deep features of the streaming graph. CESGNN organically combines the original features, the shallow features extracted by CU-GCN, and the deep features extracted by AD-GNN to obtain a more accurate and comprehensive representation of streaming graph features. Experimental results show that the CESGNN model has good adaptability and stability on streaming graphs and improves the accuracy of streaming graph node classification.
-
Dynamic Community Detection with Hierarchical Modularity Optimization
朱瑞, 叶亚琴, 李圣文, 汤子健, 肖玥. 基于层次结构嵌入的动态社区检测[J]. 计算机科学, 2025, 52(8): 127-135.
ZHU Rui, YE Yaqin, LI Shengwen, TANG Zijian, XIAO Yue. Dynamic Community Detection with Hierarchical Modularity Optimization[J]. Computer Science, 2025, 52(8): 127-135. doi:10.11896/jsjkx.240600103
Abstract
Serving as a powerful tool for understanding intrinsic patterns and organizational structures within complex networks, dynamic community detection unveils the evolutionary process of densely connected sets of nodes, standing as a fundamental task in disciplines such as social sciences and urban planning. In recent years, various methods based on representation learning have been applied to the field of dynamic community detection. These methods map structured nodes into a low-dimensional continuous latent space by integrating network topology and evolution characteristics, achieving reliable measurement of node similarity and difference. However, existing representation learning methods inadequately consider nodes' long-range information, falling short of capturing global structural features. To address this issue, this paper proposes DHM, which combines modularity optimization and hierarchical structure embedding to capture long-range node interactions. Specifically, DHM generates a hierarchical organization based on the network's multi-granularity nature and embeds different levels of node relationships into node representations through bottom-up and top-down message-passing mechanisms. Experimental results on synthetic and real-world network datasets demonstrate that DHM outperforms existing dynamic community detection algorithms in terms of normalized mutual information, adjusted Rand index, and modularity, and can effectively detect communities in temporal networks.
-
Douglas-Peucker Algorithm Based Learned Index Structure for Road Network Trajectory Data
缪祝青, 韩京宇, 李彩云, 王彦之, 毛毅, 张怡婷. 基于道格拉斯-普克算法的路网轨迹学习索引结构[J]. 计算机科学, 2025, 52(8): 136-145.
MIAO Zhuqing, HAN Jingyu, LI Caiyun, WANG Yanzhi, MAO Yi, ZHANG Yiting. Douglas-Peucker Algorithm Based Learned Index Structure for Road Network Trajectory Data[J]. Computer Science, 2025, 52(8): 136-145. doi:10.11896/jsjkx.240600075
Abstract
In recent years, location-based service technology has developed rapidly, producing a large amount of road network trajectory data. As a type of road network trajectory query, the path range query is the basis for supporting other query types. This paper proposes a Douglas-Peucker based learned index structure (DPLI) to efficiently index massive road network trajectory data and provide accurate path range query services. Firstly, the trajectory data is divided into multiple trajectory segments, the midpoint of each trajectory segment is taken as the representative of the trajectory data, and a mapping function maps these representatives to a sequence of one-dimensional mapping values; the trajectory data is then divided into multiple data fragments according to the number of key values. Within each fragment, the first and last data points form a line segment, and the fitting error of the remaining data points relative to this line segment is calculated. Data points exceeding the error threshold become new segment endpoints, and the original segment is recursively divided until the fitting error of all data points is below the threshold, thereby fitting a piecewise linear function. The results show that, compared with traditional index methods, DPLI has faster construction efficiency and disk access efficiency. Compared with learned indexing methods, DPLI maintains its advantage in build efficiency and achieves a 100% query recall rate.
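The recursive, error-bounded segmentation described above is the classic Douglas-Peucker idea applied to a key-to-position mapping; a minimal sketch follows. The array layout and error threshold are illustrative, not DPLI's on-disk format.

```python
# Douglas-Peucker style segmentation of (key, position) pairs for a learned index:
# keys are fitted by line segments whose endpoints are placed at the points of
# maximum fitting error, recursively, until every point is within max_error.

def segment(keys, positions, max_error=2.0):
    """Return a list of (start_idx, end_idx) segments covering keys/positions."""
    def fit_error(i, j):
        # distance of each intermediate point from the line through points i and j
        x1, y1, x2, y2 = keys[i], positions[i], keys[j], positions[j]
        slope = (y2 - y1) / (x2 - x1) if x2 != x1 else 0.0
        errors = [abs(positions[k] - (y1 + slope * (keys[k] - x1)))
                  for k in range(i + 1, j)]
        if not errors:
            return 0.0, None
        worst = max(range(len(errors)), key=errors.__getitem__)
        return errors[worst], i + 1 + worst

    def split(i, j):
        err, worst = fit_error(i, j)
        if err <= max_error:
            return [(i, j)]
        return split(i, worst) + split(worst, j)   # recurse on both halves

    return split(0, len(keys) - 1)

if __name__ == "__main__":
    ks = list(range(0, 100))
    pos = [k + (5 if 30 < k < 60 else 0) for k in ks]   # piecewise-ish mapping
    print(segment(ks, pos))
```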
-
Efficient Indexing Method for Massive 3D Geological Block Models Based on Inverted-B+ Tree
陈根深, 刘刚, 董洋, 范文遥, 易强, 姜子鑫. 基于Inverted-B+树的海量三维地质块体模型高效索引方法[J]. 计算机科学, 2025, 52(8): 146-153.
CHEN Genshen, LIU Gang, DONG Yang, FAN Wenyao, YI Qiang, JIANG Zixin. Efficient Indexing Method for Massive 3D Geological Block Models Based on Inverted-B+ Tree[J]. Computer Science, 2025, 52(8): 146-153. doi:10.11896/jsjkx.240700127
Abstract
The prevalence of zero or null values in three-dimensional geological block models raises maintenance costs and efficiency issues due to frequent splitting and adjustment of the B+ tree-based attribute index structure. An indexing method based on the Inverted-B+ Tree (IBT) is proposed. This method minimizes structural adjustments during data processing by constructing an IBT index structure, creating inverted nodes for duplicate keys inserted into leaf nodes. It accelerates queries by storing intermediate index values in internal nodes and establishing bidirectional links between leaf and inverted nodes, enabling efficient range queries via sequential access to the dataset from any leaf node. Six geological block models obtained after voxelization, interpolation, and dimension reduction of geological structural models are mainly used in the experiments. Compared with the traditional B+ tree, the results show that IBT performs well in terms of index construction, space usage, and query efficiency. In particular, when managing large amounts of data, index construction and information query efficiency are improved by 71% and space usage is reduced by 83%, and the method is relatively stable and scalable for information queries on 3D geological blocks.
-
Multi-target Trajectory Generation Method Based on Motion Features
张浩然, 王桂玲. 基于运动特征的多目标航迹生成方法[J]. 计算机科学, 2025, 52(8): 154-161.
ZHANG Haoran, WANG Guiling. Multi-target Trajectory Generation Method Based on Motion Features[J]. Computer Science, 2025, 52(8): 154-161. doi:10.11896/jsjkx.241100031
Abstract
In the maritime multi-target tracking context of space tracking vessels, the trajectory association of target ships has remained a formidable challenge. Owing to the highly dynamic nature of the oceanic environment and the irregularity and randomness of sea clutter, the detected target points frequently contain a multitude of false detections. This paper presents a motion-feature-based multi-target trajectory generation approach, which comprises two crucial stages: preprocessing and trajectory segment association. In the preprocessing stage, trajectory outliers are eliminated by imposing threshold constraints on latitude, longitude, speed, and heading angle, followed by a B-spline-based sampling-segmentation-interpolation method to enhance the completeness, continuity, and smoothness of the target trajectories. In the trajectory segment association stage, a multi-target trajectory association strategy is formulated, integrating motion features and temporal constraints. Experimental results in real maritime scenarios illustrate that the proposed method substantially enhances the accuracy and robustness of trajectory generation.
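The two preprocessing ideas, threshold-based outlier removal and B-spline smoothing, can be illustrated with a short sketch. The threshold values and the (lat, lon, speed) column layout are assumptions made for the example, not the paper's configuration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Sketch: remove impossible trajectory points, then fit/resample a B-spline.

def remove_outliers(track: np.ndarray,
                    lat_range=(-90, 90), lon_range=(-180, 180),
                    max_speed=40.0):
    lat, lon, speed = track[:, 0], track[:, 1], track[:, 2]
    keep = ((lat >= lat_range[0]) & (lat <= lat_range[1]) &
            (lon >= lon_range[0]) & (lon <= lon_range[1]) &
            (speed <= max_speed))
    return track[keep]

def bspline_resample(track: np.ndarray, n_points: int = 200, smooth: float = 1e-4):
    """Fit a smoothing B-spline through (lat, lon) and resample it uniformly."""
    tck, _ = splprep([track[:, 0], track[:, 1]], s=smooth)
    u_new = np.linspace(0.0, 1.0, n_points)
    lat_new, lon_new = splev(u_new, tck)
    return np.column_stack([lat_new, lon_new])

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    raw = np.column_stack([30 + t, 120 + np.sin(3 * t), np.full_like(t, 12.0)])
    raw[10, 2] = 300.0                       # inject an impossible speed
    cleaned = remove_outliers(raw)
    smooth_track = bspline_resample(cleaned)
    print(cleaned.shape, smooth_track.shape)
```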
-
Clustering Algorithm Based on Improved SOM Model
蒋锐, 范姝文, 王小明, 徐友云. 基于改进SOM网络的聚类算法[J]. 计算机科学, 2025, 52(8): 162-170.
JIANG Rui, FAN Shuwen, WANG Xiaoming, XU Youyun. Clustering Algorithm Based on Improved SOM Model[J]. Computer Science, 2025, 52(8): 162-170. doi:10.11896/jsjkx.240700017
Abstract
In the training process of the Self-Organizing Map (SOM), different classes of data have varying effects on the update of the weight matrix. Therefore, the weight matrix update driven by one class of data affects the feature vectors of the winning neurons that correspond to other classes of data. This impact causes the winning neurons to deviate from the features of the data, thus reducing the clustering accuracy of the algorithm. To address this issue, this paper proposes an improved confidence-based SOM model (icSOM). Firstly, the sample data is pre-classified by the K-means algorithm to provide more information for model training. Secondly, the pre-classified data is used to train separate SOM models for different classes, eliminating the interference caused by data from other classes. Building on the traditional SOM model, the concept of a confidence matrix is then proposed. By comprehensively evaluating the confidence of the winning neurons and their Euclidean distance to the input data, a confident neuron is finally obtained, and the clustering label assigned to the input data is the class of this confident neuron. Using icSOM for clustering analysis of the Iris and Wine datasets, the experimental results show that the proposed algorithm handles sample data more effectively and achieves better clustering performance.
-
Text Clustering Approach Based on Key Semantic Driven and Contrastive Learning
张士举, 郭朝阳, 吴承亮, 吴凌俊, 杨丰玉. 基于关键语义驱动和对比学习的文本聚类方法[J]. 计算机科学, 2025, 52(8): 171-179.
ZHANG Shiju, GUO Chaoyang, WU Chengliang, WU Lingjun, YANG Fengyu. Text Clustering Approach Based on Key Semantic Driven and Contrastive Learning[J]. Computer Science, 2025, 52(8): 171-179. doi:10.11896/jsjkx.240700008
Abstract
Text clustering groups a large amount of text data according to similarity, which helps to understand the structure and content of text data and to discover patterns and trends in it; it is widely used in information retrieval and document management. Existing text clustering models suffer from over-reliance on the quality of the original data and insufficient extraction of key information, and data of different categories overlap in the representation space. To solve the above problems, a key-semantics-driven and contrastive-learning-based text clustering method (KSD-CLTC) is proposed. In data processing, a data augmentation module enriches the original data to improve generalization, and a key-semantics-driven module extracts keywords from the text to compensate for the loss of key semantic information. In feature extraction, a pre-trained model and an autoencoder are used to obtain high-quality representations of the data. In cluster learning, the clustering loss is combined with the reconstruction loss of the key-semantics-driven module to further learn feature representations more suitable for clustering, and a contrastive learning module is used to achieve better classification results. KSD-CLTC outperforms the compared clustering algorithms on both public and industrial datasets, improving ACC by an average of 2.92% and NMI by an average of 1.99% across all datasets compared with the state-of-the-art SCCL method. The clustering results also demonstrate the importance of key semantics for text clustering.
-
Active Learning for Point Cloud Semantic Segmentation Based on Dynamic Balance and Distance Suppression
曾欣然, 李天瑞, 李崇寿. 基于动态平衡和距离抑制的点云语义分割主动学习[J]. 计算机科学, 2025, 52(8): 180-187.
ZENG Xinran, LI Tianrui, LI Chongshou. Active Learning for Point Cloud Semantic Segmentation Based on Dynamic Balance and Distance Suppression[J]. Computer Science, 2025, 52(8): 180-187. doi:10.11896/jsjkx.240900104
Abstract
In recent years, deep learning-based point cloud semantic segmentation has achieved remarkable success, but it heavily relies on a large amount of densely annotated point cloud data. In order to reduce the annotation cost, many weakly supervised learning methods have emerged, and active learning is one of them. It reduces the annotation cost by selecting a subset of the point cloud for annotation, but current methods do not fully consider the connections among all points in a region when estimating the amount of regional information, and previous diversity selection methods are time-consuming. To alleviate these issues, this paper proposes an active learning method for point cloud semantic segmentation based on dynamic balance and distance suppression. The method considers the connections among all points in a region by introducing regional inconsistency, and uses a dynamic balance strategy to adjust the importance of point-level uncertainty and regional inconsistency when measuring the amount of regional information. In addition, a feature-normal distance suppression strategy is designed to select representative regions. This strategy uses a simpler approach when considering the spatial structure between regions, avoiding redundant labeling by removing adjacent similar regions and thereby improving the efficiency of diversity selection. Experimental results on the S3DIS and Semantic3D datasets show that the proposed framework achieves state-of-the-art performance and effectively reduces the annotation cost and diversity selection time.
-
MTFuse: An Infrared and Visible Image Fusion Network Based on Mamba and Transformer
丁政泽, 聂仁灿, 李锦涛, 苏华平, 徐航. MTFuse:基于Mamba和Transformer的红外与可见光图像融合网络[J]. 计算机科学, 2025, 52(8): 188-194.
DING Zhengze, NIE Rencan, LI Jintao, SU Huaping, XU Hang. MTFuse: An Infrared and Visible Image Fusion Network Based on Mamba and Transformer[J]. Computer Science, 2025, 52(8): 188-194. doi:10.11896/jsjkx.240600106
Abstract
Infrared and visible image fusion aims to retain the thermal radiation information from infrared images and the texture details from visible images to represent the imaging scene and comprehensively support downstream visual tasks. Fusion models based on convolutional neural networks (CNNs) encounter limitations in capturing global image features due to their focus on local convolutional operations. Although Transformer-based models excel in global feature modeling, they face computational challenges posed by quadratic complexity. Recently, the selective structured state-space model (Mamba) has shown great potential in modeling long-range dependencies with linear complexity, providing a promising path to address the aforementioned issues. To efficiently model long-range dependencies in images, this paper designs a residual selective structured state-space module (RMB) for extracting global features. Simultaneously, to model the relationship between multimodal images, a cross-modal query fusion attention module (CQAM) is designed for adaptive feature fusion. Furthermore, a loss function consisting of two terms, a gradient loss and a brightness loss, is designed to train the proposed model in an unsupervised manner. Comparative experiments on fusion quality and efficiency with numerous state-of-the-art methods, together with ablation studies, demonstrate the effectiveness of the proposed MTFuse method.
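An unsupervised fusion loss of the kind described, one brightness (intensity) term plus one gradient term, can be sketched as below. The Sobel kernels and elementwise-max targets are common choices in fusion work; the exact formulation and weights in MTFuse may differ.

```python
import torch
import torch.nn.functional as F

# Sketch of a two-term unsupervised fusion loss: intensity + gradient.

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 1, H, W) grayscale; returns gradient magnitude map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fusion_loss(fused, ir, vis, w_grad: float = 1.0, w_int: float = 1.0):
    # brightness term: keep the brighter of the two sources at each pixel
    intensity_loss = F.l1_loss(fused, torch.maximum(ir, vis))
    # gradient term: preserve the stronger texture/edge response
    grad_loss = F.l1_loss(sobel_gradient(fused),
                          torch.maximum(sobel_gradient(ir), sobel_gradient(vis)))
    return w_int * intensity_loss + w_grad * grad_loss

if __name__ == "__main__":
    ir = torch.rand(2, 1, 64, 64)
    vis = torch.rand(2, 1, 64, 64)
    fused = (ir + vis) / 2
    print(fusion_loss(fused, ir, vis).item())
```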
-
IBSNet: A Neural Implicit Field for IBS Prediction in Single-view Scanned Point Cloud
袁右文, 金朔, 赵玺. IBSNet:用于估计单视角扫描点云交互平分面的神经隐式场[J]. 计算机科学, 2025, 52(8): 195-203.
YUAN Youwen, JIN Shuo, ZHAO Xi. IBSNet: A Neural Implicit Field for IBS Prediction in Single-view Scanned Point Cloud[J]. Computer Science, 2025, 52(8): 195-203. doi:10.11896/jsjkx.240900086
Abstract
The analysis of spatial relationships between 3D objects is of great significance for scene understanding and interaction. For example, by analyzing the spatial relationship between a robot and an object, the robot can be guided to grasp the object more accurately; by learning the spatial relationships between objects in real scenes, the generation of virtual scenes that look more natural or better meet interaction needs can be guided. However, because single-view scanned point clouds obtained by RGB-D cameras or LiDAR usually contain many artifacts and much noise, existing methods for analyzing the spatial relationships of objects often struggle to make accurate predictions on single-view scanned point clouds, which makes them impractical for real applications. To handle spatial relationship analysis for single-view scanned point clouds, this paper uses the interaction bisector surface (IBS) to express spatial relationships and proposes a dual-object differential unsigned distance field to implicitly represent the IBS. Inspired by the implicit function learning methods widely used in recent years, this paper designs a neural implicit field to fit the differential unsigned distance field. The neural implicit field takes the single-view scanned point clouds of two objects as input and returns the differential unsigned distance field of the two objects. The network uses two multi-layer self-attention point cloud encoders to extract the features of the two input point clouds and then combines these features; the combined features are fed into a dual-object unsigned distance decoder to obtain the unsigned distance values of the query points. Comparative experiments with other methods (Geometry Method, IMNet, and Grasping Field) are conducted on the ICON dataset. Single-view scans of each scene are simulated from 26 different viewpoints to obtain the single-view scanned point clouds, and the whole dataset is split into training and test sets by scene. The robustness of each method is also tested on single-view scanned point clouds with different degrees of incompleteness and noise. Experimental results show that the proposed neural implicit field is very robust to input single-view scanned point clouds with different degrees of incompleteness and can efficiently predict IBS with accurate shapes.
-
Hash Image Retrieval Based on Mixed Attention and Polarization Asymmetric Loss
刘华咏, 徐明慧. 基于混合注意力与偏振非对称损失的哈希图像检索[J]. 计算机科学, 2025, 52(8): 204-213.
LIU Huayong, XU Minghui. Hash Image Retrieval Based on Mixed Attention and Polarization Asymmetric Loss[J]. Computer Science, 2025, 52(8): 204-213. doi:10.11896/jsjkx.240600057
Abstract
With the continuous development of the Internet, massive and complex image data is created every day, and today's mainstream social media is full of complex media data such as images. Processing these images effectively can not only increase the utilization of image data but also improve the user experience, so how to retrieve images quickly and accurately has become a meaningful and urgent problem. Current mainstream hash image retrieval models are based on convolutional neural networks. However, the convolution operation of a CNN can only capture local features and cannot model global information, and because the receptive field of the convolution operation is fixed, it cannot adapt to input images of different scales. This paper builds on the Swin Transformer to achieve effective image retrieval: the Transformer architecture addresses these CNN limitations through its self-attention mechanism and positional encoding. However, the window attention module of existing Swin Transformer hashing retrieval models gives the same weight to different image channels when extracting features, ignoring the differences and dependencies among the feature information of different channels, which reduces the usefulness of the extracted features and wastes computing resources. To solve these problems, this paper proposes a hash image retrieval model based on mixed attention and a polarization asymmetric loss. The model is built on a Swin Transformer feature extraction module, and a channel attention block is added to the window self-attention module of HFST, yielding a mixed-attention hash feature extraction module that assigns different weights to the features of different channels of the input image, increasing the diversity of the extracted features and making full use of computing resources. At the same time, in order to minimize the intra-class Hamming distance, maximize the inter-class Hamming distance, make full use of the supervision information of the data, and improve retrieval accuracy, this paper proposes a polarization asymmetric loss function, which combines the polarization loss and the asymmetric loss with a certain weight ratio, effectively improving image retrieval precision. The experimental results show the validity and rationality of the proposed method. For example, when the hash code length is 16 bits, the proposed model achieves a maximum mean average precision of 98.73% on the CIFAR-10 single-label dataset, 1.51% higher than the VTS16-CSQ model, and 90.65% on the NUS-WIDE multi-label dataset, 18.02% higher than TransHash and 5.92% higher than VTS16-CSQ.
-
Improved RT-DETR Algorithm for Small Object Detection in Remote Sensing Images
沈涛, 张秀再, 许岱. 改进RT-DETR的遥感图像小目标检测算法[J]. 计算机科学, 2025, 52(8): 214-221.
SHEN Tao, ZHANG Xiuzai, XU Dai. Improved RT-DETR Algorithm for Small Object Detection in Remote Sensing Images[J]. Computer Science, 2025, 52(8): 214-221. doi:10.11896/jsjkx.241000019
Abstract
To address the high miss rate and false detection rate of object detection algorithms in remote sensing images and their poor performance in detecting small objects, this paper proposes an improved RT-DETR object detection algorithm. To enhance the model's capability of detecting targets of different sizes in remote sensing images, variable kernel convolution and diversified branch structures are employed to enrich multi-scale representation capabilities. To avoid the loss of small object information during downsampling, Haar wavelet downsampling is used to retain as much feature information as possible. To prevent the loss of small object feature information during complex network iterations and pooling, the SPABC3 module is designed to enhance high-contribution information and suppress redundant information through symmetric activation functions and residual connections. Experimental results show that the improved RT-DETR algorithm achieves mAP@0.5 of 42.7% and 95.3% on the VisDrone2019 and RSOD datasets respectively, outperforming other mainstream algorithms and improving the detection accuracy of small objects in remote sensing images, thereby meeting the detection requirements for small objects in remote sensing images.
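Haar wavelet downsampling replaces strided convolution or pooling by splitting the feature map into its 2x2 Haar sub-bands, halving the spatial resolution while moving the information into the channel dimension. The sketch below shows the idea; the trailing 1x1 projection and sub-band sign convention are illustrative choices, not the paper's exact module.

```python
import torch
import torch.nn as nn

# Sketch of a Haar wavelet downsampling (HWD) layer.

class HaarDownsample(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(4 * in_channels, out_channels, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W), H and W even
        a = x[..., 0::2, 0::2]                 # top-left samples of each 2x2 block
        b = x[..., 0::2, 1::2]                 # top-right
        c = x[..., 1::2, 0::2]                 # bottom-left
        d = x[..., 1::2, 1::2]                 # bottom-right
        ll = (a + b + c + d) / 2               # low-low (approximation)
        lh = (-a - b + c + d) / 2              # horizontal detail
        hl = (-a + b - c + d) / 2              # vertical detail
        hh = (a - b - c + d) / 2               # diagonal detail
        return self.proj(torch.cat([ll, lh, hl, hh], dim=1))

if __name__ == "__main__":
    layer = HaarDownsample(64, 128)
    print(layer(torch.randn(1, 64, 80, 80)).shape)   # -> (1, 128, 40, 40)
```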
-
VSRI: Visual Semantic Relational Interactor for Image Caption
刘健, 姚任远, 高楠, 梁荣华, 陈朋. VSRI:基于视觉语义关系交互的图像字幕生成方法[J]. 计算机科学, 2025, 52(8): 222-231.
LIU Jian, YAO Renyuan, GAO Nan, LIANG Ronghua, CHEN Peng. VSRI: Visual Semantic Relational Interactor for Image Caption[J]. Computer Science, 2025, 52(8): 222-231. doi:10.11896/jsjkx.240600082
Abstract
Image captioning is one of the key objectives of multimodal image understanding. This paper aims to generate detail-rich and accurate image captions. Currently, mainstream image captioning methods focus on the interrelationships between regions but ignore the visual semantic relationships between regions and grids, leading to suboptimal generation results. This paper proposes a visual semantic relational interactor (VSRI) framework, which dynamically constructs visual semantic relational interactions between regions and grids to generate captions with rich scene details and accurate relationships. Specifically, region semantic relations are first constructed by the semantic relation constructor (SRC). Then, a visual-semantic relation joint encoder (VSRJE) module is proposed to construct visual and semantic relational interactions within and between regions and grids. Finally, an adaptive bridging decoder (ABD) module is designed to dynamically balance the contributions of multi-granularity region and grid features and generate text. Experiments on the MSCOCO dataset show that the proposed VSRI significantly outperforms baselines on 7 different metrics such as BLEU and METEOR.
-
Video Super-resolution Model Based on Implicit Alignment
王凤玲, 魏爱敏, 庞雄文, 李智, 谢景明. 基于隐式对齐的视频超分辨率模型[J]. 计算机科学, 2025, 52(8): 232-239.
WANG Fengling, WEI Aimin, PANG Xiongwen, LI Zhi, XIE Jingming. Video Super-resolution Model Based on Implicit Alignment[J]. Computer Science, 2025, 52(8): 232-239. doi:10.11896/jsjkx.240500069
Abstract
Video contains both intra-frame spatial correlation and inter-frame temporal correlation. When reconstructing high-resolution video from low-resolution video, information from adjacent frames can be aligned to guide the recovery of the current frame. Deformable convolution guided by optical flow is commonly used for explicit frame-by-frame alignment; this approach overcomes the instability of deformable convolution, but it affects the recovery of high-frequency information within frames, reduces the accuracy of the alignment information, and magnifies artifacts. To address these issues, this paper proposes IAVSR (Implicit Alignment Video Super-Resolution), a video super-resolution model based on implicit alignment. IAVSR encodes optical flow to specific pixel positions using offsets and original values, computing pre-alignment information instead of interpolating. Deformable convolution is then used to realign the pre-aligned features and recover high-frequency information. Bidirectional propagation uses information from the preceding two frames to guide current frame recovery, while a residual network structure improves alignment accuracy and avoids introducing excessive parameters. Experimental results on the REDS4 public dataset show that IAVSR achieves a PSNR 0.6 dB higher than the benchmark models and improves model convergence speed by 20% during training.
-
Research on Hyperspectral Image Super-resolution Methods Based on Tensor Ring Subspace Smoothing and Graph Regularization
杨飞霞, 李正, 马飞. 基于张量环子空间平滑与图正则的高光谱图像超分辨率方法研究[J]. 计算机科学, 2025, 52(8): 240-250.
YANG Feixia, LI Zheng, MA Fei. Research on Hyperspectral Image Super-resolution Methods Based on Tensor Ring Subspace Smoothing and Graph Regularization[J]. Computer Science, 2025, 52(8): 240-250. doi:10.11896/jsjkx.240600026
Abstract
Existing classical matrix decomposition models may lose three-dimensional data structure information and, especially under noise pollution, produce a significant decrease in the quality of reconstructed images. To address this, this paper proposes a hyperspectral and multispectral fusion method that combines subspace smoothing with graph regularization, aiming to achieve hyperspectral image super-resolution reconstruction by utilizing manifold structures and local smoothness characteristics while preserving cube structure features. Firstly, the local self-similarity of the spatial and spectral subspaces is used to construct spatial and spectral graphs from the tensor ring factors, mining the spatial-spectral manifold structure to improve the quality of reconstructed images. Secondly, a subspace smoothing regularization is introduced to promote piecewise smoothness in the subspace of the target image. Finally, an efficient proximal alternating minimization algorithm is designed to solve the proposed model. Experiments on three commonly used datasets show that the proposed model can improve spatial details and structure while suppressing noise to a certain extent.
-
Few-shot Video Action Recognition Based on Two-stage Spatio-Temporal Alignment
王佳, 夏英, 丰江帆. 基于两阶段时空对齐的小样本视频行为识别[J]. 计算机科学, 2025, 52(8): 251-258.
WANG Jia, XIA Ying, FENG Jiangfan. Few-shot Video Action Recognition Based on Two-stage Spatio-Temporal Alignment[J]. Computer Science, 2025, 52(8): 251-258. doi:10.11896/jsjkx.240900127
Abstract
Few-shot video action recognition aims to construct efficient learning models using limited training samples, thereby reducing the dependence of traditional action recognition on large-scale, finely annotated datasets. At present, most few-shot learning models classify videos based on their similarity. However, because the spatiotemporal distributions of action instances differ, there are temporal and action-evolution mismatches between the query video and the support videos, which affect the recognition performance of the model. To address this issue, a two-stage spatiotemporal alignment network (TSAN) is proposed to improve the alignment accuracy of video data and thereby enhance the accuracy of few-shot video action recognition. This network adopts the basic architecture of meta-learning. In the first stage, the action temporal alignment module (ATAM) constructs video frame pairs in tuple form, subdividing video actions into sub-action sequences and combining them with temporal information in the video data to improve the efficiency of few-shot learning. In the second stage, the action evolution alignment module (AEAM), together with its temporal synchronization submodule (TSM) and spatial coordination submodule (SCM), calibrates the query features to match the spatiotemporal action evolution of the support set, thereby improving the accuracy of few-shot video action recognition. Experimental results on the HMDB51, UCF101, SSV2100, and Kinetics100 datasets show that the TSAN network achieves higher recognition accuracy than existing few-shot video action recognition methods.
-
Cross-lingual Information Retrieval Based on Aligned Query
李俊文, 宋雨秋, 张维彦, 阮彤, 刘井平, 朱焱. 基于对齐查询的跨语言信息检索方法[J]. 计算机科学, 2025, 52(8): 259-267.
LI Junwen, SONG Yuqiu, ZHANG Weiyan, RUAN Tong, LIU Jingping, ZHU Yan. Cross-lingual Information Retrieval Based on Aligned Query[J]. Computer Science, 2025, 52(8): 259-267. doi:10.11896/jsjkx.241000055
Abstract
Cross-lingual Information Retrieval (CLIR) is an important information acquisition task in natural language processing. Recently, LLM-based retrieval methods have gained attention and demonstrated remarkable progress on this task. However, existing unsupervised retrieval methods based on prompting large language models are still insufficient in effectiveness and efficiency. To solve this problem, this paper introduces a novel CLIR method based on aligned queries. Specifically, it adopts the "pretrain-finetune" paradigm and proposes an adaptive self-teaching encoder based on a pretrained multilingual model, which uses mono-lingual retrieval learning to guide cross-lingual retrieval learning. The method introduces semantically aligned queries in the same language as the documents and designs an adaptive self-teaching mechanism that guides cross-lingual retrieval by leveraging the probability distributions of mono-lingual retrieval results from different linguistic perspectives. To evaluate the effectiveness and efficiency of this method, extensive experiments are conducted on 22 language pairs. The results demonstrate that the proposed method achieves state-of-the-art performance in terms of MRR. In particular, it improves average MRR by 15.45% over the sub-optimal baseline on high-resource language pairs and by 18.9% on low-resource language pairs. Furthermore, the method reduces training and inference time compared with LLM-based approaches and exhibits faster convergence with enhanced stability.
-
Research on Continual Social Event Classification Based on Continual Event Knowledge Network
张袁, 张胜杰, 刘利龙, 钱胜胜. 基于持续事件知识网络的持续社会事件分类研究[J]. 计算机科学, 2025, 52(8): 268-276.
ZHANG Yuan, ZHANG Shengjie, LIU Lilong, QIAN Shengsheng. Research on Continual Social Event Classification Based on Continual Event Knowledge Network[J]. Computer Science, 2025, 52(8): 268-276. doi:10.11896/jsjkx.240600146
Abstract
With the rapid development of the Internet and the burgeoning scale of social media, social event classification (SEC) has garnered increasing attention. Existing SEC studies focus on recognizing a fixed set of social events. However, in real-world scenarios, new social events continually emerge on social media, which calls for a practical SEC model that can swiftly adapt to an evolving environment with incremental social events. Therefore, this paper studies a new yet crucial problem defined as continual social event classification (C-SEC), where new events continually emerge in sequentially collected social data. Accordingly, this paper proposes a novel continual event knowledge network (CEKNet) to continually learn event knowledge for C-SEC with continually incremental events. The proposed continual learning framework consists of two components: present event knowledge learning and past event knowledge replay. Firstly, present event knowledge learning learns to classify newly emerging events in the presently incoming data. Secondly, past event knowledge replay with self-knowledge distillation is designed to consolidate the learned knowledge of past events and prevent catastrophic forgetting. Comprehensive experiments on real-world social event datasets demonstrate the superiority of the proposed CEKNet for C-SEC compared with state-of-the-art methods.
-
Application of Decoupled Knowledge Distillation Method in Document-level Relation Extraction
刘乐, 肖蓉, 杨肖. 解耦知识蒸馏在文档级关系抽取中的应用[J]. 计算机科学, 2025, 52(8): 277-287.
LIU Le, XIAO Rong, YANG Xiao. Application of Decoupled Knowledge Distillation Method in Document-level Relation Extraction[J]. Computer Science, 2025, 52(8): 277-287. - LIU Le, XIAO Rong, YANG Xiao
- Computer Science. 2025, 52 (8): 277-287. doi:10.11896/jsjkx.240600050
-
Abstract
PDF(2530KB) ( 11 )
- References | Related Articles | Metrics
-
Document-level relation extraction (DocRE) is an important research direction in natural language processing, aiming to extract semantic relationships between entities from unstructured or semi-structured documents. This paper proposes a solution that combines decoupled knowledge distillation with a cross multi-head attention mechanism to address the DocRE task. Firstly, the cross multi-head attention mechanism not only attends to elements in different attention heads simultaneously, enabling the model to exchange and integrate information at different granularities and levels, but also allows the model to consider the correlations among head entities, tail entities, and their relations when computing attention, thereby enhancing its understanding of complex relationships and improving the learned entity representations. Additionally, to further optimize performance, this paper introduces a decoupled knowledge distillation method adapted to distantly supervised data. The method decouples the original KL-divergence loss into a target-class knowledge distillation loss (TCKDL) and a non-target-class knowledge distillation loss (NCKDL), whose relative weights can be adjusted through hyperparameters, increasing the flexibility and effectiveness of the distillation process. In particular, it enables more precise knowledge transfer when dealing with the noise in the distantly supervised portion of DocRED. Experimental results show that the proposed model extracts relationships between entity pairs more effectively on the DocRED dataset.
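The decoupling of the KL loss into TCKDL and NCKDL can be sketched as follows, in the general spirit of decoupled knowledge distillation; the temperature and the α/β weights are placeholders and may differ from the paper's exact formulation.

```python
# Hedged sketch of decoupled knowledge distillation: split the KL loss into a
# target-class term (TCKDL) and a non-target-class term (NCKDL).
import torch
import torch.nn.functional as F

def decoupled_kd(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    s = F.softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    mask = F.one_hot(target, s.size(1)).bool()
    # Binary distributions over {target class, all non-target classes}
    s_bin = torch.stack([s[mask], 1.0 - s[mask]], dim=1)
    t_bin = torch.stack([t[mask], 1.0 - t[mask]], dim=1)
    tckd = F.kl_div(s_bin.log(), t_bin, reduction="batchmean") * T * T
    # Distribution over non-target classes only (target logit masked out)
    s_nt = F.log_softmax(student_logits / T - 1000.0 * mask.float(), dim=1)
    t_nt = F.softmax(teacher_logits / T - 1000.0 * mask.float(), dim=1)
    nckd = F.kl_div(s_nt, t_nt, reduction="batchmean") * T * T
    return alpha * tckd + beta * nckd   # hyperparameters trade off the two terms
```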
-
Multimodal Multiobjective Optimization Algorithm Based on Local Center Clustering
岳彩通, 叶文豪, 张颖洁, 梁静, 林泓宇. 基于局部中心解聚类的多模态多目标优化算法[J]. 计算机科学, 2025, 52(8): 288-299.
YUE Caitong, YE Wenhao, ZHANG Yingjie, LIANG Jing, LIN Hongyu. Multimodal Multiobjective Optimization Algorithm Based on Local Center Clustering[J]. Computer Science, 2025, 52(8): 288-299. - YUE Caitong, YE Wenhao, ZHANG Yingjie, LIANG Jing, LIN Hongyu
- Computer Science. 2025, 52 (8): 288-299. doi:10.11896/jsjkx.240700094
-
Abstract
PDF(3393KB) ( 15 )
- References | Related Articles | Metrics
-
In multimodal multiobjective optimization problems, multiple global and local optimal solutions can provide flexible options for decision makers. However, current research on multimodal multiobjective algorithms mostly focuses on multiple equivalent global Pareto optimal sets, ignoring local Pareto optimal sets of comparable quality. To address this problem, a multimodal multiobjective optimization algorithm based on local center clustering is proposed. The algorithm locates as many optimal regions as possible through a selection strategy for local center solutions, and then designs two different search strategies according to how thoroughly the population has explored each optimal region, so that the population can adaptively choose its mutation strategy according to its own state. Thus, each optimal region can be explored well. The proposed algorithm is tested on the CEC2020 multimodal multiobjective benchmark functions. The results show that the proposed evolutionary algorithm performs well on problems with multiple global Pareto sets as well as problems with both global and local Pareto sets.
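As a generic niching-style sketch of picking cluster-wise "center" solutions (not the paper's exact selection strategy), one could cluster the population in decision space and keep one representative per cluster:

```python
# Illustrative selection of local center solutions by clustering the population
# in decision space; this is a generic niching sketch under assumed settings.
import numpy as np
from sklearn.cluster import KMeans

def local_center_solutions(X, F, n_clusters=8):
    """X: [n, d] decision vectors; F: [n, m] objective values (minimized)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    centers = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # keep the member with the best aggregate objective as the cluster center
        centers.append(idx[np.argmin(F[idx].sum(axis=1))])
    return np.array(centers)
```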
-
Cross-domain Aspect-based Sentiment Analysis Based on Pre-training Model with Data Augmentation
陈舸, 王中卿. 结合预训练模型和数据增强的跨领域属性级情感分析研究[J]. 计算机科学, 2025, 52(8): 300-307.
CHEN Ge, WANG Zhongqing. Cross-domain Aspect-based Sentiment Analysis Based on Pre-training Model with Data Augmentation[J]. Computer Science, 2025, 52(8): 300-307. - CHEN Ge, WANG Zhongqing
- Computer Science. 2025, 52 (8): 300-307. doi:10.11896/jsjkx.240900114
-
Abstract
PDF(2318KB) ( 22 )
- References | Related Articles | Metrics
-
Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task that aims to identify specific aspects in text and determine their sentiment orientation. To address the poor performance of ABSA models caused by their inability to adapt to the language styles of different domains and by the lack of labeled data in the target domain, this paper proposes a cross-domain aspect-based sentiment analysis method built on pre-trained models. A pre-trained model is used to generate labels for target-domain text, and a large language model is used to regenerate natural sentences with a stronger target-domain style. Finally, the generated samples are combined with source-domain samples for training to make predictions on the target domain. Experimental results on the restaurant and laptop datasets from the SemEval corpus, as well as a publicly available Web service review dataset, show that compared with existing cross-domain sentiment analysis methods, the proposed method achieves at least a 5.33% improvement in F1 score, fully demonstrating its effectiveness.
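A hedged sketch of the augmentation pipeline described above; `absa_model`, `llm_generate`, and the prompt wording are hypothetical placeholders rather than the paper's actual interfaces.

```python
# Illustrative pipeline: a source-trained ABSA model pseudo-labels target-domain
# text, an LLM rewrites the sentences in target-domain style, and the augmented
# data is merged with source data for joint training.
def build_augmented_corpus(source_data, target_texts, absa_model, llm_generate):
    augmented = []
    for text in target_texts:
        aspects = absa_model.predict(text)        # pseudo aspect/polarity labels
        prompt = (
            "Rewrite the sentence in the style of target-domain reviews, "
            f"keeping the aspects and sentiments unchanged:\n{text}\n"
            f"Aspects: {aspects}"
        )
        new_text = llm_generate(prompt)           # style-transferred sentence
        augmented.append((new_text, aspects))
    return list(source_data) + augmented          # joint training set
```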
-
Multi-defendant Legal Judgment Prediction with Multi-turn LLM and Criminal Knowledge Graph
王东升. 基于多轮LLM和犯罪知识图谱的多被告人法律判决预测[J]. 计算机科学, 2025, 52(8): 308-316.
WANG Dongsheng. Multi-defendant Legal Judgment Prediction with Multi-turn LLM and Criminal Knowledge Graph[J]. Computer Science, 2025, 52(8): 308-316. - WANG Dongsheng
- Computer Science. 2025, 52 (8): 308-316. doi:10.11896/jsjkx.240900170
-
Abstract
PDF(3167KB) ( 13 )
- References | Related Articles | Metrics
-
Some studies use advanced Large Language Model (LLM) technologies to understand legal facts and predict the defendant's charges, prison term, and other judgment results. Going a step further, this paper tackles the more complex task of predicting legal judgments for multiple defendants, which is more challenging than prediction for a single defendant. Specifically, the interaction with the LLM is upgraded from a single-turn to a multi-turn process to enhance the LLM's understanding of criminal cases. In addition, two types of crime Knowledge Graphs (KGs) are constructed to describe the case: the criminal relationship knowledge graph depicts the assistance relationships between defendants, while the sentencing circumstance knowledge graph represents the core criminal details of the case. Based on these crime knowledge graphs, a retrieval system is designed to provide the LLM with references to judgments of similar cases. In experiments on predicting legal judgments for multiple defendants, the proposed method outperforms the comparison methods, showing that the multi-turn LLM interaction and the crime knowledge graphs are effective.
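The multi-turn interaction with KG-assisted retrieval could be organized roughly as below; `llm_chat` and `retrieve_similar_cases` are hypothetical interfaces, not the paper's actual components.

```python
# Rough sketch of multi-turn prompting plus retrieval over the two crime KGs.
def predict_judgments(fact_text, defendants, llm_chat, retrieve_similar_cases):
    history = []
    # Turn 1: extract the criminal relationship graph (assistance between defendants)
    rel_graph = llm_chat(history, f"Extract the assistance relations among "
                                  f"{defendants} from the facts:\n{fact_text}")
    # Turn 2: extract the sentencing circumstance graph (core criminal details)
    circ_graph = llm_chat(history, "List each defendant's sentencing circumstances.")
    # Retrieve judgments of similar cases using the two knowledge graphs
    references = retrieve_similar_cases(rel_graph, circ_graph)
    # Final turns: predict charge and prison term per defendant with references
    return {
        d: llm_chat(history, f"Given similar cases {references}, predict the "
                             f"charge and prison term for defendant {d}.")
        for d in defendants
    }
```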
-
Congestion-aware and Cached Communication for Multi-agent Pathfinding
张永良, 李子文, 许家豪, 江雨宸, 崔滢. 基于拥塞感知和缓存通信的多智能体路径规划[J]. 计算机科学, 2025, 52(8): 317-325.
ZHANG Yongliang, LI Ziwen, XU Jiahao, JIANG Yuchen, CUI Ying. Congestion-aware and Cached Communication for Multi-agent Pathfinding[J]. Computer Science, 2025, 52(8): 317-325. - ZHANG Yongliang, LI Ziwen, XU Jiahao, JIANG Yuchen, CUI Ying
- Computer Science. 2025, 52 (8): 317-325. doi:10.11896/jsjkx.240900012
-
Abstract
PDF(2811KB) ( 10 )
- References | Related Articles | Metrics
-
Multi-agent Path Finding (MAPF) is an essential component of large-scale robotic systems. Traditional planners based on conflict search are limited in scalability by computation time, whereas multi-agent reinforcement learning strategies built on communication mechanisms significantly alleviate this issue. As task complexity and scale increase, communicating effectively and avoiding congestion become significant obstacles for learning-based methods. To overcome these challenges, this paper proposes a decentralized planning method called Congestion-Aware and Cached Communication for Multi-agent Pathfinding (C3MAP), which features cache-based communication and congestion awareness. Specifically, agents broadcast to their neighbors only when the current environmental information differs significantly from the previously communicated information or when they receive request signals from other agents. Additionally, congestion information is incorporated into the local observation to guide agents away from congested areas. Experimental results on benchmarks indicate that C3MAP achieves a solution success rate of over 90% in structured scenarios, significantly outperforming existing learning-based methods. Experiments in large-scale environments further confirm the greater stability of the caching communication mechanism and the effectiveness of the congestion awareness strategy.
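The cached-communication rule can be illustrated as follows; the change metric, threshold, and agent attributes are assumptions rather than C3MAP's exact design.

```python
# Minimal sketch: an agent broadcasts only when its observation has changed
# enough since the last broadcast, or when a neighbor explicitly requests it.
import numpy as np

def should_broadcast(obs, last_sent, request_received, threshold=0.2):
    if last_sent is None or request_received:
        return True
    # Relative L2 change in the flattened local observation since the last broadcast
    change = np.linalg.norm(obs.ravel() - last_sent.ravel())
    return change > threshold * (np.linalg.norm(last_sent.ravel()) + 1e-8)

def step_communication(agent):
    if should_broadcast(agent.obs, agent.last_sent, agent.request_received):
        agent.broadcast(agent.obs)           # send to neighbors within range
        agent.last_sent = agent.obs.copy()   # update the communication cache
    agent.request_received = False
```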
-
Multi-UAV Path Planning Algorithm Based on Improved Dueling-DQN
付文浩, 葛礼勇, 汪文, 张淳. 基于改进Dueling-DQN的多无人机路径规划算法[J]. 计算机科学, 2025, 52(8): 326-334.
FU Wenhao, GE Liyong, WANG Wen, ZHANG Chun. Multi-UAV Path Planning Algorithm Based on Improved Dueling-DQN[J]. Computer Science, 2025, 52(8): 326-334. - FU Wenhao, GE Liyong, WANG Wen, ZHANG Chun
- Computer Science. 2025, 52 (8): 326-334. doi:10.11896/jsjkx.240600104
-
Abstract
PDF(3634KB) ( 8 )
- References | Related Articles | Metrics
-
To address path planning for multiple unmanned aerial vehicles (UAVs) pursuing dynamic targets in three-dimensional environments with unknown obstacles, this paper proposes a path planning algorithm based on an improved dueling deep Q network (Dueling-DQN) that combines the artificial potential field method with deep reinforcement learning, targeting the problem of multiple UAVs cooperating to capture dynamic targets. Firstly, it incorporates the idea of the artificial potential field method into the training reward function for cooperative capture, which not only remedies the tendency of traditional artificial potential field methods to fall into local optima in complex environments, but also handles multi-UAV cooperation and obstacle avoidance in such environments. Additionally, to promote better cooperation among UAVs, a capture-and-escape strategy between the multiple UAVs and the dynamic targets is designed. Simulation results demonstrate that, compared with the Dueling-DQN algorithm, the proposed APF-Dueling-DQN algorithm effectively reduces the probability of collisions during UAV trajectory planning and shortens the planned path length required to capture dynamic targets.
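A minimal sketch of shaping the reward with artificial-potential-field terms, assuming simple attractive/repulsive forms; the coefficients are illustrative, not the paper's values.

```python
# APF-style reward shaping: attraction toward the (moving) target and repulsion
# from obstacles, added to the task reward used to train Dueling-DQN.
import numpy as np

def apf_reward(uav_pos, target_pos, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    # Attractive term: closer to the target is better
    r_att = -k_att * np.linalg.norm(uav_pos - target_pos)
    # Repulsive term: penalize being within distance d0 of any obstacle
    r_rep = 0.0
    for obs in obstacles:
        d = np.linalg.norm(uav_pos - obs)
        if d < d0:
            r_rep -= k_rep * (1.0 / (d + 1e-6) - 1.0 / d0)
    return r_att + r_rep
```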
-
Cubic+:Enhanced Cubic Congestion Control for Cross-datacenter Networks
龙铁, 肖甫, 樊卫北, 何昕, 王俊昌. Cubic+:用于跨数据中心网络的改进Cubic拥塞控制算法[J]. 计算机科学, 2025, 52(8): 335-342.
LONG Tie, XIAO Fu, FAN Weibei, HE Xin, WANG Junchang. Cubic+:Enhanced Cubic Congestion Control for Cross-datacenter Networks[J]. Computer Science, 2025, 52(8): 335-342. - LONG Tie, XIAO Fu, FAN Weibei, HE Xin, WANG Junchang
- Computer Science. 2025, 52 (8): 335-342. doi:10.11896/jsjkx.250100056
-
Abstract
PDF(3165KB) ( 7 )
- References | Related Articles | Metrics
-
Cross-datacenter networks connect data center networks (DCNs) across regions via wide-area networks (WANs), enabling distributed applications to deliver high-quality services. However, differences in buffer sizes and round-trip times between DCNs and WANs challenge the Cubic algorithm, leading to inaccurate rate reductions, high packet loss, and poor compatibility with other algorithms. To address these challenges, this paper proposes Cubic+, an improved version of Cubic that adapts to different congestion patterns. Specifically, Cubic+ integrates delay, ECN (Explicit Congestion Notification), and packet loss signals. Cubic+ adapts to congestion at shallow-buffered switches by periodically draining queues and quickly reduces packet backlogs at deep-buffered routers. Large-scale NS3 simulations show that Cubic+ reduces average flow completion time by up to 20.77% and 99th-percentile completion time by 15.88%, offering a high-throughput solution for cross-datacenter networks.
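A highly simplified illustration of reacting to the three signals with graded window reductions; the constants and decision order are assumptions, not the Cubic+ specification.

```python
# Toy per-ACK decision combining delay, ECN, and loss signals with different
# congestion-window reductions (cwnd is in packets).
def on_ack(cwnd, rtt, base_rtt, ecn_marked, loss_detected,
           delay_thresh=1.5, beta_loss=0.7, beta_ecn=0.85, beta_delay=0.95):
    if loss_detected:
        return cwnd * beta_loss       # severe congestion (deep queues): cut hard
    if ecn_marked:
        return cwnd * beta_ecn        # shallow-buffered switch marked the packet
    if rtt > delay_thresh * base_rtt:
        return cwnd * beta_delay      # rising delay: drain the queue gently
    return cwnd + 1.0 / cwnd          # otherwise grow (Cubic growth omitted)
```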
-
SDN-based Integrated Communication and Storage Edge In-network Storage Node Selection Method
叶苗, 王珏, 蒋秋香, 王勇. 基于SDN的通存一体化边缘在网存储节点选择方法[J]. 计算机科学, 2025, 52(8): 343-353.
YE Miao, WANG Jue, JIANG Qiuxiang, WANG Yong. SDN-based Integrated Communication and Storage Edge In-network Storage Node Selection Method[J]. Computer Science, 2025, 52(8): 343-353. - YE Miao, WANG Jue, JIANG Qiuxiang, WANG Yong
- Computer Science. 2025, 52 (8): 343-353. doi:10.11896/jsjkx.240900174
-
Abstract
PDF(3580KB) ( 10 )
- References | Related Articles | Metrics
-
In conventional edge-distributed storage systems, data is stored across multiple edge servers, where transmission latency is constrained by the distance to the edge servers, and the management and configuration of network communication between server nodes lack flexibility. Data transfer for edge storage is affected by factors such as bandwidth, throughput, and network failures. Moreover, traditional systems often consider only storage node capacity when selecting storage locations, overlooking the impact of edge network load and storage node load on data storage efficiency. To address these issues, this paper designs a converged edge in-network storage architecture that integrates the flexibility of Software-Defined Networking (SDN) with the efficiency of the Server Message Block (SMB) protocol. The architecture stores data generated within the edge network on selected network forwarding nodes, and a prototype system is implemented through a custom-developed edge-converged network switch. Firstly, a custom SDN switch coupled with storage functionality is developed to serve as an in-network storage node, allowing data to be stored on these forwarding nodes and effectively reducing data transmission latency. Then, SDN technology is used to acquire real-time network status and storage node information, enabling dynamic optimization of network transmission and alleviating the complexity of network configuration and management. Based on this information, a multi-attribute decision-making model for storage node selection is established, together with a hierarchical analysis algorithm that considers both network and node states for in-network storage placement. Finally, experimental results demonstrate that, compared with conventional data storage methods in edge-distributed storage systems, the designed and implemented converged edge in-network storage system offers more flexible network management and configuration and significantly reduces data storage latency.
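A toy multi-attribute scoring of candidate storage nodes from SDN-collected metrics might look like the following; the attributes and weights are illustrative assumptions, not the paper's decision model.

```python
# Toy weighted-sum selection of an in-network storage node: normalize each
# metric, score candidates, and pick the best one.
def select_storage_node(candidates, weights=(0.3, 0.25, 0.25, 0.2)):
    """candidates: list of dicts with bandwidth, latency, cpu_load, free_space."""
    def normalize(values, benefit=True):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        return [(v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo)
                for v in values]
    bw = normalize([c["bandwidth"] for c in candidates], benefit=True)
    lat = normalize([c["latency"] for c in candidates], benefit=False)
    cpu = normalize([c["cpu_load"] for c in candidates], benefit=False)
    cap = normalize([c["free_space"] for c in candidates], benefit=True)
    scores = [weights[0]*b + weights[1]*l + weights[2]*c + weights[3]*s
              for b, l, c, s in zip(bw, lat, cpu, cap)]
    return max(range(len(candidates)), key=scores.__getitem__)
```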
-
Effective Task Offloading Strategy Based on Heterogeneous Nodes
范兴刚, 姜新阳, 谷文婷, 徐骏涛, 杨友东, 李强. 基于异构节点的高效任务卸载策略[J]. 计算机科学, 2025, 52(8): 354-362.
FAN Xinggang, JIANG Xinyang, GU Wenting, XU Juntao, YANG Youdong, LI Qiang. Effective Task Offloading Strategy Based on Heterogeneous Nodes[J]. Computer Science, 2025, 52(8): 354-362. - FAN Xinggang, JIANG Xinyang, GU Wenting, XU Juntao, YANG Youdong, LI Qiang
- Computer Science. 2025, 52 (8): 354-362. doi:10.11896/jsjkx.240700088
-
Abstract
PDF(2788KB) ( 10 )
- References | Related Articles | Metrics
-
In vehicular edge computing (VEC), how to use limited network resources to achieve efficient task offloading has been a research hotspot in recent years. This paper focuses on task offloading in the heterogeneous node mode and designs an efficient task offloading strategy for heterogeneous nodes (TOS-HN). When a vehicle generates a task, mobile node offloading is prioritized, offloading the task to a nearby idle vehicle. If mobile offloading cannot meet the task requirements, a fixed node offloading strategy is adopted. In the mobile node offloading mode, a cost matrix is first constructed from the task processing delay and energy consumption, and the Hungarian algorithm is then used to determine the optimal matching between task vehicles and processing vehicles. Simulation experiments show that the TOS-HN algorithm has significant advantages over other algorithms, with better performance in terms of delay, energy consumption, task success rate, and base station load.
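The Hungarian matching step can be sketched with SciPy's `linear_sum_assignment`; the delay/energy cost model below is a placeholder, not the paper's exact formulation.

```python
# Match task vehicles to idle processing vehicles by minimizing a weighted
# delay/energy cost matrix with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tasks_to_vehicles(delay, energy, w_delay=0.6, w_energy=0.4):
    """delay, energy: [n_tasks, n_idle_vehicles] cost matrices."""
    cost = w_delay * delay + w_energy * energy
    task_idx, vehicle_idx = linear_sum_assignment(cost)   # optimal assignment
    return list(zip(task_idx.tolist(), vehicle_idx.tolist()))

# Example with 3 tasks and 3 idle vehicles
pairs = match_tasks_to_vehicles(np.random.rand(3, 3), np.random.rand(3, 3))
```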
-
Motion-angle-based Video Frame Deletion Detection Algorithm and Its Evidentiary Validity Standards
王康庆, 夏立款, 李硕. 基于运动角度的视频帧删除检测算法及其证据效力规范[J]. 计算机科学, 2025, 52(8): 363-373.
WANG Kangqing, XIA Likuan, LI Shuo. Motion-angle-based Video Frame Deletion Detection Algorithm and Its Evidentiary Validity Standards[J]. Computer Science, 2025, 52(8): 363-373. - WANG Kangqing, XIA Likuan, LI Shuo
- Computer Science. 2025, 52 (8): 363-373. doi:10.11896/jsjkx.250500051
-
Abstract
PDF(4752KB) ( 11 )
- References | Related Articles | Metrics
-
In recent years, malicious video tampering has become increasingly prevalent, posing severe challenges to the authenticity and reliability of electronic evidence. Among such tampering methods, video frame deletion, which can obscure factual truth, proves particularly destructive to video-based electronic evidence. Consequently, frame deletion detection has attracted growing research attention. Current mainstream detection methods primarily rely on identifying the degradation of content continuity in the temporal domain caused by frame deletion. However, the complexity of temporal information in videos makes such temporal continuity-based detection approaches unstable. To address this issue, this paper focuses on the motion patterns of objects in videos. By establishing a first-order Markov model, it derives frequency-domain Markov continuity decay traces. Based on these frequency-domain traces, it then proposes a video frame deletion detection algorithm utilizing time-frequency analysis techniques. Experimental results demonstrate that, compared with temporal continuity decay traces, the frequency-domain continuity decay-based detection algorithm exhibits superior forensic performance. Building on this technical advancement, the research further constructs a legal framework from the perspectives of evidence legality, authenticity, and relevance, providing theoretical references for improving electronic evidence regulations in the digital era. This dual approach serves both technological justice and procedural justice objectives.
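A very rough sketch of the detection idea, assuming a per-frame motion-angle sequence is already available; the statistic and threshold are illustrative, not the paper's algorithm.

```python
# Compute a first-order (Markov-style) continuity signal from per-frame motion
# angles, inspect it in the frequency domain, and flag abrupt continuity decay.
import numpy as np

def continuity_trace(motion_angles):
    """motion_angles: per-frame dominant motion direction in radians."""
    # First-order differences approximate frame-to-frame continuity
    diffs = np.abs(np.diff(np.unwrap(motion_angles)))
    # Frequency-domain view of the continuity signal
    spectrum = np.abs(np.fft.rfft(diffs - diffs.mean()))
    return diffs, spectrum

def looks_tampered(motion_angles, z_thresh=4.0):
    diffs, _ = continuity_trace(motion_angles)
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-8)
    return bool(np.any(z > z_thresh))   # abrupt continuity decay suggests deletion
```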
-
Cross-domain Graph Anomaly Detection Via Dual Classification and Reconstruction
苏世玉, 于炯, 李姝, 酒世承. 基于双重分类和重建的跨域图异常检测[J]. 计算机科学, 2025, 52(8): 374-384.
SU Shiyu, YU Jiong, LI Shu, JIU Shicheng. Cross-domain Graph Anomaly Detection Via Dual Classification and Reconstruction[J]. Computer Science, 2025, 52(8): 374-384. - SU Shiyu, YU Jiong, LI Shu, JIU Shicheng
- Computer Science. 2025, 52 (8): 374-384. doi:10.11896/jsjkx.241000140
-
Abstract
PDF(2992KB) ( 10 )
- References | Related Articles | Metrics
-
Cross-domain graph anomaly detection improves the accuracy of detecting anomalous nodes by leveraging labeled source graphs to assist detection on unlabeled target graphs, effectively reducing the high false positive rate of unsupervised detection. Aligning features between source and target graphs remains challenging due to the complex relationships between graph topology and node attributes, and the diversity of anomalous nodes further complicates detection. To address this, a Dual Classification and Reconstruction Network (DCRN) is proposed. DCRN employs a reconstruction-based strategy for domain adaptation, jointly optimizing shared structure and attribute encoders, anomaly classifiers, and decoders. This enables the model to capture the complex topological and attribute relationships between source and target graphs, achieving effective feature alignment and knowledge transfer. DCRN combines classifier and decoder outputs to identify both shared and unique anomalies in the target graph, enhancing detection accuracy and robustness. Experiments on four real-world datasets show that DCRN outperforms 10 baseline algorithms, with an average improvement of 4.5% in AUC-ROC, 20.5% in AUC-PR, and a 16.13% reduction in FAR, demonstrating its effectiveness in detecting anomalous nodes in target graphs.
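The joint objective can be sketched as a classification term on the labeled source graph plus reconstruction terms on both graphs; all module names and weights below are placeholders rather than the paper's design.

```python
# Coarse sketch of a dual classification-and-reconstruction objective with
# shared structure/attribute encoders (enc_s, enc_a), an anomaly classifier,
# and structure/attribute decoders (dec_s, dec_a).
def dcrn_style_loss(enc_s, enc_a, clf, dec_s, dec_a, src, tgt, bce, mse, lam=1.0):
    # src/tgt expose .adj, .attr, and (for the source graph only) .labels
    h_src = enc_s(src.adj) + enc_a(src.attr)
    h_tgt = enc_s(tgt.adj) + enc_a(tgt.attr)
    cls = bce(clf(h_src), src.labels)                    # supervised on the source graph
    rec = (mse(dec_s(h_src), src.adj) + mse(dec_a(h_src), src.attr) +
           mse(dec_s(h_tgt), tgt.adj) + mse(dec_a(h_tgt), tgt.attr))
    return cls + lam * rec        # reconstruction on both graphs aligns features
```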
-
Proxy-based Bidirectional Coin Mixing Mechanism of Blockchain
冯艺萌, 冯雁, 谢四江, 张青. 基于代理人的区块链双向混币协议[J]. 计算机科学, 2025, 52(8): 385-392.
FENG Yimeng, FENG Yan, XIE Sijiang, ZHANG Qing. Proxy-based Bidirectional Coin Mixing Mechanism of Blockchain[J]. Computer Science, 2025, 52(8): 385-392. - FENG Yimeng, FENG Yan, XIE Sijiang, ZHANG Qing
- Computer Science. 2025, 52 (8): 385-392. doi:10.11896/jsjkx.240600079
-
Abstract
PDF(2475KB) ( 7 )
- References | Related Articles | Metrics
-
Considering that blockchain transaction mapping analysis may leak users' privacy and that third-party mixing service providers may not be trustworthy, this paper proposes PBShuffle, a proxy-based bidirectional coin mixing protocol that requires no third party. The protocol does not involve any third-party mixing service provider; instead, output addresses are delivered to the aggregated users through a proxy. The proxy is randomly selected by each participant from among all participants and performs two rounds of mixing to deliver output addresses to the two aggregated users respectively. The protocol uses double encryption to protect privacy during output address delivery: the proxy can only decrypt down to the message that remains encrypted with the aggregated user's public key, while the aggregated user only knows that the message was delivered by the proxy and cannot trace the participant who originated it. Theoretical analysis shows that the protocol offers strong security in terms of unlinkability, verifiability, and robustness. Comparison experiments with CoinShuffle show that PBShuffle achieves higher efficiency and lower overhead when the number of participating users is large, making it more suitable for practical applications.
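The double-encryption delivery can be illustrated with PyNaCl sealed boxes (a stand-in for whatever encryption the protocol actually specifies); key distribution and the two mixing rounds are omitted, and the address below is a dummy value.

```python
# Illustration only: the participant encrypts its output address for the
# aggregated user (inner layer), then wraps that ciphertext for the proxy
# (outer layer). The proxy strips only the outer layer; the aggregated user
# recovers the address but never learns which participant sent it.
from nacl.public import PrivateKey, SealedBox

agg_sk, proxy_sk = PrivateKey.generate(), PrivateKey.generate()
agg_pk, proxy_pk = agg_sk.public_key, proxy_sk.public_key

# Participant side: inner layer for the aggregated user, outer layer for the proxy
inner = SealedBox(agg_pk).encrypt(b"dummy_output_address")
outer = SealedBox(proxy_pk).encrypt(inner)

# Proxy side: removes the outer layer and forwards the still-encrypted inner part
forwarded = SealedBox(proxy_sk).decrypt(outer)

# Aggregated user side: recovers the output address
address = SealedBox(agg_sk).decrypt(forwarded)
```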
-
Super Spreader Detection Algorithm Based on Adaptive Sampling
孙靖宇, 黄河, 孙玉娥, 张博宇. 基于自适应采样的超级传播者检测算法[J]. 计算机科学, 2025, 52(8): 393-402.
SUN Jingyu, HUANG He, SUN Yu'e, ZHANG Boyu. Super Spreader Detection Algorithm Based on Adaptive Sampling[J]. Computer Science, 2025, 52(8): 393-402. - SUN Jingyu, HUANG He, SUN Yu'e, ZHANG Boyu
- Computer Science. 2025, 52 (8): 393-402. doi:10.11896/jsjkx.240900085
-
Abstract
PDF(3074KB) ( 10 )
- References | Related Articles | Metrics
-
In high-speed network environments, super spreaders are defined as hosts or devices with a very large number of connections. Accurate super spreader detection is crucial for many applications such as network monitoring, security analysis, and traffic management. Invertible sketch-based algorithms have received extensive attention due to their excellent memory efficiency and their ability to recover super spreader identities directly from the internal structure. Depending on the application of interest, the packets sent or received by the same host or device are abstracted as a flow. The flow distribution in high-speed networks is usually highly skewed: only a few flows are large, and the vast majority are small. However, the memory structures of existing work cannot efficiently adapt to this highly skewed distribution, resulting in low memory utilization. Therefore, this paper designs AS-SSD, a super spreader detection algorithm based on adaptive sampling. The algorithm proposes an adaptive sampling strategy based on register sharing to make up for this shortcoming. AS-SSD first maps the elements of all flows into a register array; small flows consume only a few registers, while larger flows consume more, which adapts to the skewed flow distribution. AS-SSD then uses an adaptive sampling strategy to dynamically adjust the element sampling probability for flows of different sizes, reducing the registers occupied by large flows and further improving memory utilization while preserving accuracy. Experimental evaluations show that, compared with prior work, AS-SSD achieves higher detection accuracy in the super spreader detection task while maintaining high-throughput processing. Compared with the most advanced algorithms, it improves the F1 score by more than 0.609.
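A toy sketch of the adaptive-sampling idea follows (illustration only, not the AS-SSD data structure; a real sketch would not keep per-flow state as this toy does).

```python
# Toy adaptive sampling over a shared register array: (src, dst) elements are
# hashed into registers, and elements of flows whose estimated spread has grown
# are sampled with decreasing probability.
import hashlib
import random

class ToySpreadSketch:
    def __init__(self, m=4096):
        self.registers = [0] * m          # shared by all flows
        self.estimate = {}                # crude per-flow spread proxy (toy only)

    def _h(self, *parts):
        return int(hashlib.md5("|".join(parts).encode()).hexdigest(), 16)

    def insert(self, src, dst):
        est = self.estimate.get(src, 1)
        p = min(1.0, 32.0 / est)          # adaptive sampling probability
        if random.random() > p:
            return                        # element skipped for this large flow
        idx = self._h(src, dst) % len(self.registers)
        rank = (self._h(dst, src) % 32) + 1   # HyperLogLog-style rank value
        if rank > self.registers[idx]:
            self.registers[idx] = rank
        self.estimate[src] = est + 1
```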
-
Linear Interpolation Method for Adversarial Attack
陈军, 周强, 鲍蕾, 陶卿. 一种基于线性插值的对抗攻击方法[J]. 计算机科学, 2025, 52(8): 403-410.
CHEN Jun, ZHOU Qiang, BAO Lei, TAO Qing. Linear Interpolation Method for Adversarial Attack[J]. Computer Science, 2025, 52(8): 403-410. - CHEN Jun, ZHOU Qiang, BAO Lei, TAO Qing
- Computer Science. 2025, 52 (8): 403-410. doi:10.11896/jsjkx.240700058
-
Abstract
PDF(2389KB) ( 13 )
- References | Related Articles | Metrics
-
Deep neural networks exhibit significant vulnerability to adversarial examples and are prone to attack. The construction of adversarial examples can be abstracted as an optimization problem that maximizes an objective function. However, gradient-based iterative methods often face convergence difficulties on such problems. These methods rely primarily on the sign of the gradient for iterative updates, neglecting the gradient's magnitude and direction, which can make the algorithms unstable. Studies have shown that the I-FGSM adversarial attack algorithm originates from the stochastic projected subgradient method in optimization. The literature further indicates that, in optimization problems, replacing stochastic projected subgradient methods with linear interpolation methods can achieve superior performance. Based on this, this paper proposes a novel linear interpolation-based adversarial attack method, which applies the interpolation strategy to adversarial attacks and replaces the traditional sign gradient with the actual gradient. Theoretically, the proposed linear interpolation adversarial attack algorithm is proven to achieve the optimal individual convergence rate on general convex optimization problems, thereby overcoming the convergence difficulties of sign-gradient-based algorithms. Experimental results confirm that the linear interpolation method is a general and efficient strategy: combined with gradient-based adversarial attack methods, it yields new attack algorithms that, compared with the original ones, significantly increase attack success rates while maintaining the imperceptibility of adversarial examples, and exhibit high stability during iteration.
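A hedged sketch contrasting I-FGSM's sign update with an update that uses the raw gradient and linearly interpolates (averages) the iterates; the step size and interpolation schedule are assumptions, not the paper's exact algorithm.

```python
# Iterative attack that (1) steps along the normalized raw gradient instead of
# its sign and (2) returns a linear interpolation (running average) of iterates,
# projected back into the epsilon-ball and valid pixel range.
import torch

def interp_attack(model, loss_fn, x, y, eps=8/255, steps=10, alpha=2/255):
    x_adv = x.clone().detach()
    x_avg = x_adv.clone()                      # interpolated (averaged) iterate
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            g = grad / (grad.norm() + 1e-12)   # raw gradient direction, not sign
            x_adv = x_adv + alpha * g
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
            x_avg = (1 - 1 / t) * x_avg + (1 / t) * x_adv   # linear interpolation
    return x_avg.clamp(x - eps, x + eps).clamp(0, 1)
```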
-
State-decomposition Distributed Dual Averaging Algorithm for Privacy Online Constrained Optimization over Directed Networks
代祥光, 贺成龙, 管明宇, 张伟, 周炀, 刘建峰, 吕庆国. 有向网络下隐私在线约束优化问题的状态分解分布式对偶平均算法[J]. 计算机科学, 2025, 52(8): 411-420.
DAI Xiangguang, HE Chenglong, GUAN Mingyu, ZHANG Wei, ZHOU Yang, LIU Jianfeng, LYU Qingguo. State-decomposition Distributed Dual Averaging Algorithm for Privacy Online Constrained Optimization over Directed Networks[J]. Computer Science, 2025, 52(8): 411-420. - DAI Xiangguang, HE Chenglong, GUAN Mingyu, ZHANG Wei, ZHOU Yang, LIU Jianfeng, LYU Qingguo
- Computer Science. 2025, 52 (8): 411-420. doi:10.11896/jsjkx.250300083
-
Abstract
PDF(3045KB) ( 11 )
- References | Related Articles | Metrics
-
This paper investigates a class of distributed online constrained optimization problems with a common constraint set. In this setting, nodes in the network collaborate to solve the problem through local computation and communication. Each node can only access its own local loss function, whose value depends on its decision variables at each iteration. However, since nodes continuously broadcast information related to their private data during communication, most existing algorithms face the risk of privacy leakage. To address this issue, this paper proposes an efficient state-decomposition-based distributed dual averaging algorithm. The algorithm integrates state decomposition and gradient adjustment strategies to enhance privacy protection while mitigating the imbalance of directed networks. Notably, it does not require additional hidden signals or significantly increase computational complexity. Theoretical analysis shows that the proposed method achieves the desired sublinear regret while effectively preserving the privacy of each node's loss function. Furthermore, simulation experiments confirm the convergence and feasibility of the algorithm.
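A per-node sketch of a dual-averaging update in which only a "public" substate is exchanged while a coupled "private" substate stays local; the mixing weights, decomposition coefficients, step size, and projection are illustrative assumptions, not the paper's algorithm.

```python
# One node's update: mix neighbors' public dual substates, split the new
# gradient between the public and private substates, then take a dual-averaging
# step projected onto a common constraint set (a Euclidean ball here).
import numpy as np

def project(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)   # common constraint set

def node_update(z_pub, z_priv, grad, neighbor_pub_states, weights, t, a=1.0):
    # Mix only the public dual substates received from in-neighbors
    z_mixed = sum(w * z for w, z in zip(weights, neighbor_pub_states))
    # Couple the two substates and split the gradient between them
    z_pub_new = 0.5 * z_mixed + 0.5 * z_priv + 0.5 * grad
    z_priv_new = 0.5 * z_priv + 0.5 * z_mixed + 0.5 * grad
    # Dual averaging: primal decision from the accumulated public dual state
    x_new = project(-a / np.sqrt(t + 1) * z_pub_new)
    return z_pub_new, z_priv_new, x_new
```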