


-
Survey of Open-source Software Component Vulnerability Detection and Automatic Repair Technology
张旭明, 史涯晴, 黄松, 王兴亚, 胡津昌, 陆江涛. 开源软件组件漏洞检测与自动修复技术研究综述[J]. 计算机科学, 2025, 52(6): 1-20.
ZHANG Xuming, SHI Yaqing, HUANG Song, WANG Xingya, HU Jinchang, LU Jiangtao. Survey of Open-source Software Component Vulnerability Detection and Automatic Repair Technology[J]. Computer Science, 2025, 52(6): 1-20.
- Computer Science. 2025, 52 (6): 1-20. doi:10.11896/jsjkx.240400023
-
Abstract
The unstoppable trend of software componentization and the development of multi-person collaborative work modes have led to the gradual formation of open-source software supply chains. While flourishing open-source software brings great convenience, open-source vulnerabilities quietly emerge along the supply chain, posing a threat to the security and reliability of software systems. Software composition analysis technology can provide substantial support for maintaining the security of fragile open-source software supply chains. It discovers and fixes potential vulnerabilities and risks in software projects through two core functions: open-source software component vulnerability detection and automatic repair. To deepen the understanding of relevant researchers on the practical application of software composition analysis in security vulnerabilities, this paper reviews and summarizes recent research progress and achievements in open-source software component vulnerability detection and automatic repair technologies. Furthermore, starting from the users' perspective, a summary and analysis of eight common software composition analysis tools across eight analysis dimensions are provided. Finally, the existing challenges of open-source software component vulnerability detection and automatic repair technology are discussed, and their possible future development directions are explored.
-
Modeling Mechanism and Review of Imperfect Debugging Reliability Model Related to the Total Number of Faults in Software
张策, 孙智超, 纪可行, 王金勇, 王宇彬. 软件中总故障个数相关的不完美排错可靠性模型建模机理与述评[J]. 计算机科学, 2025, 52(6): 21-34.
ZHANG Ce, SUN Zhichao, JI Kexing, WANG Jinyong, WANG Yubin. Modeling Mechanism and Review of Imperfect Debugging Reliability Model Related to the Total Number of Faults in Software[J]. Computer Science, 2025, 52(6): 21-34.
- Computer Science. 2025, 52 (6): 21-34. doi:10.11896/jsjkx.240300061
-
Abstract
In reliability research, the total number of software faults is crucial for allocating testing resources, assessing reliability changes, and determining optimal release times. However, there has been limited research from the perspective of the total fault count thus far. This study delves deeply into reliability growth models related to the total number of faults in software, particularly in environments that closely mimic real testing scenarios, including imperfect debugging. Initially, the study reviews the software reliability growth model (SRGM), outlining its main themes, essence, and technical content, and introduces the analysis of the total number of faults in software. It incorporates models that introduce new faults from the perspective of imperfect debugging, and establishes an imperfect debugging model to categorize the dynamics of total faults and cumulative detected faults under various conditions. Subsequently, from the perspectives of imperfect debugging and the introduction of new faults, a unified binary first-order system of imperfect-debugging differential equations is developed to describe the software testing process, and expressions for the total number of faults and the cumulative number of detected faults are derived. The performance of these models is validated against multiple real-world fault datasets from computer engineering systems, analyzing their fitting and predictive capabilities and thereby assessing the impact of the total fault count on reliability variations. The results indicate that the total number of faults significantly influences the reliability models and supports reliability growth and performance enhancement. Finally, this paper highlights forthcoming research challenges and pressing issues that need to be addressed.
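For orientation, a widely used imperfect-debugging formulation (given here purely as a generic illustration; it is not necessarily the unified system derived in the paper) couples the cumulative number of detected faults m(t) with a time-varying total fault content a(t):

\frac{dm(t)}{dt} = b(t)\,\bigl[a(t) - m(t)\bigr], \qquad \frac{da(t)}{dt} = \alpha\,\frac{dm(t)}{dt}

where b(t) is the fault detection rate and \alpha is the fault introduction rate. When \alpha > 0, debugging itself introduces new faults, so a(t) grows as testing proceeds, which is exactly the coupling between the total fault count and the detected fault count that the paper investigates.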
-
Flow-sensitive Coding Style Checking for C/C++ Programs
胡梦泽, 马旭桐, 张豪, 张健. 流敏感的C/C++程序编程风格检查方法[J]. 计算机科学, 2025, 52(6): 35-43.
HU Mengze, MA Xutong, ZHANG Hao, ZHANG Jian. Flow-sensitive Coding Style Checking for C/C++ Programs[J]. Computer Science, 2025, 52(6): 35-43.
- Computer Science. 2025, 52 (6): 35-43. doi:10.11896/jsjkx.240300195
-
Abstract
C/C++ programming languages are applied in numerous critical software systems, where there is an extremely high demand for standardization and clarity of semantics during development. To prevent potential security issues arising from improper use of the C/C++ languages, a C/C++ Language Programming Security Subset (referred to as GJB8114) was proposed domestically. Given the abundance of rules within the standard, it is inevitable that programmers may deviate from these norms, thereby necessitating automated rule detection tools to identify such non-compliant coding practices. However, existing rule checking tools do not provide comprehensive checks against the standard, especially for rules that require understanding of the program's context, leading to high false positive rates or even a lack of support for certain checks. This paper categorizes the rules in GJB8114 and defines what constitutes a complex rule. By evaluating the Testbed tool's capability to inspect complex rules within GJB8114, it identifies that current tools lack thorough flow-sensitive analysis and are unable to perform cross-file global analysis. To address these issues, this study adopts a flow-sensitive analysis method combined with syntax tree matching and a cross-file global analysis approach, and the CruletFS tool is developed on this basis. Experimental results demonstrate that CruletFS performs better in checking complex rules than common rule checking tools such as Cppcheck and Testbed. In analyzing large-scale projects, CruletFS also outperforms Cppcheck in terms of time and memory overhead.
-
Loop-invariant Code Motion Algorithm Based on Loop Cost Analysis
姜军, 翟彦河, 曾志恒, 顾轶超, 黄亮明. 基于循环代价分析的循环不变量外提算法[J]. 计算机科学, 2025, 52(6): 44-51.
JIANG Jun, ZHAI Yanhe, ZENG Zhiheng, GU Yichao, HUANG Liangming. Loop-invariant Code Motion Algorithm Based on Loop Cost Analysis[J]. Computer Science, 2025, 52(6): 44-51.
- Computer Science. 2025, 52 (6): 44-51. doi:10.11896/jsjkx.240300166
-
Abstract
Loop-invariant code motion (LICM) is a commonly used compiler optimization for loop structures in programs. By moving invariant computations in the loop body outside the loop, the algorithm reduces the overhead of repeated computation, thus improving program execution speed. However, in the LLVM compiler, the traditional LICM algorithm hoists all loop invariants out of the loop body, which can lead to register spilling when the number of loop invariants reaches a certain level; this introduces additional memory access cost in the loop, resulting in a negative optimization effect. To address this issue, a loop cost analysis algorithm is introduced on top of the traditional LLVM LICM algorithm. This algorithm evaluates the running cost of loop-invariant code inside the loop and the spill cost that hoisting the code may cause, and assesses the benefit of moving the code outside the loop. Only loop-invariant code that produces a positive benefit is hoisted, effectively reducing the overhead of repeated computation in the loop while avoiding the risk of introducing additional costs. The proposed optimization algorithm achieves more than 17% performance improvement over the traditional LICM algorithm in typical use cases with millions of loop iterations on the domestic SW831 processor platform. Comprehensive evaluations are conducted using the SPEC CPU 2017 benchmark suite (SPECspeed 2017 Integer Suite), the Perl interpreter DKbench benchmark suite, and the Python interpreter pyperformance benchmark suite. The results show that, compared with the traditional LICM algorithm, the proposed algorithm achieves performance improvements of 0.4%, 0.63%, and 1%, respectively.
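To make the cost-benefit idea concrete, the following Python sketch illustrates a hoisting decision of the kind described above; the cost weights, the register-pressure estimate, and the function name are illustrative assumptions, not the algorithm implemented in LLVM or in this paper.

def should_hoist(inst_cost, trip_count, live_values, num_regs,
                 spill_store_cost=4, spill_reload_cost=4):
    """Decide whether hoisting one loop-invariant instruction pays off.

    inst_cost: cost of re-executing the invariant inside the loop (per iteration)
    trip_count: estimated number of loop iterations
    live_values: values live across the loop if the instruction is hoisted
    num_regs: registers available for allocation
    """
    saving = inst_cost * trip_count                     # computation saved by hoisting
    overflow = max(0, live_values + 1 - num_regs)       # extra values that would spill
    # each spilled value costs a store before the loop and a reload per iteration
    spill_penalty = overflow * (spill_store_cost + spill_reload_cost * trip_count)
    return saving > spill_penalty

# A cheap invariant in a short loop under high register pressure stays in place,
# while an expensive invariant in a hot loop with free registers is hoisted.
print(should_hoist(inst_cost=1, trip_count=8, live_values=32, num_regs=31))      # False
print(should_hoist(inst_cost=10, trip_count=10**6, live_values=8, num_regs=31))  # True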
-
Graph Neural Network Defect Prediction Method Combined with Developer Dependencies
乔羽, 徐涛, 张亚, 文凤鹏, 李强伟. 结合开发者依赖的图神经网络缺陷预测方法[J]. 计算机科学, 2025, 52(6): 52-57.
QIAO Yu, XU Tao, ZHANG Ya, WEN Fengpeng, LI Qiangwei. Graph Neural Network Defect Prediction Method Combined with Developer Dependencies[J]. Computer Science, 2025, 52(6): 52-57.
- Computer Science. 2025, 52 (6): 52-57. doi:10.11896/jsjkx.240700119
-
Abstract
In the software development process, timely identification and handling of high-risk defect modules are crucial. Traditional software defect prediction methods primarily rely on code-related information but often overlook the impact of developers' personal characteristics on software quality. To address this issue, this study proposes a novel software defect prediction model, DCN4SDP, which incorporates a developer consistency dependency network. The model first constructs a developer consistency dependency network from developer information and extracts code-related metrics as initial features for the network. It then employs a bidirectional gated graph neural network (BiGGNN) to learn node features within the network structure. Experimental results demonstrate that the DCN4SDP model significantly outperforms traditional machine learning classifiers and other deep learning methods on multiple standard datasets. For instance, DCN4SDP achieves an AUC of 0.91 and an F1 score of 0.76, both notably higher than those of the compared models. These advantages indicate that integrating the developer dimension into software defect prediction can effectively enhance the model's predictive capability and practical value, providing new insights and directions for future research in software defect prediction.
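As a rough illustration of the kind of structure a developer-based dependency network can capture (the paper's developer consistency dependency network is defined differently; the commit data, edge weighting, and module names below are hypothetical), modules can be linked whenever they share contributing developers:

import itertools
import networkx as nx

# Hypothetical commit log: module -> set of developers who touched it.
commits = {
    "core/io.py": {"alice", "bob"},
    "core/net.py": {"bob"},
    "ui/view.py": {"carol", "alice"},
}

G = nx.Graph()
G.add_nodes_from(commits)
# Connect two modules when they share at least one developer; the edge weight
# counts shared developers as a crude stand-in for "developer consistency".
for m1, m2 in itertools.combinations(commits, 2):
    shared = commits[m1] & commits[m2]
    if shared:
        G.add_edge(m1, m2, weight=len(shared))

print(list(G.edges(data=True)))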
-
Class Integration Test Order Generation Approach Fused with Deep Reinforcement Learning and Graph Convolutional Neural Network
王晨源, 张艳梅, 袁冠. 融合深度强化学习和图卷积神经网络的类集成测试序列生成方法[J]. 计算机科学, 2025, 52(6): 58-65.
WANG Chenyuan, ZHANG Yanmei, YUAN Guan. Class Integration Test Order Generation Approach Fused with Deep Reinforcement Learning and Graph Convolutional Neural Network[J]. Computer Science, 2025, 52(6): 58-65.
- Computer Science. 2025, 52 (6): 58-65. doi:10.11896/jsjkx.240700115
-
Abstract
Class integration testing ensures normal interaction and collaboration between multiple classes in a software system, and a reasonable class integration test order can reduce testing costs. To reduce the testing cost of class integration test orders, researchers at home and abroad have proposed a variety of methods for generating such orders. However, the testing cost of the orders generated by existing methods remains too high. To solve this problem, a class integration test order generation approach combining deep reinforcement learning and graph convolutional neural networks is proposed. This approach first uses a graph convolutional network as the neural network part of deep reinforcement learning and improves the network structure of the agent and the environment state, so that the environment and the agent can interact over graph-structured data. It then designs the basic elements of reinforcement learning, such as the action space and reward function, to complete the generation scenario for the class integration test order. Ultimately, the agent obtains the best class integration test order through continuous learning and trial. Experimental results show that, when overall stubbing complexity is used as the evaluation metric, this approach can reduce to a certain extent the stubbing cost required to generate class integration test orders.
-
Design and Research of SIMD Programming Interface for Sunway
姜军, 顾晓阳, 徐坤坤, 吕勇帅, 黄亮明. 面向申威平台的SIMD编程接口设计与研究[J]. 计算机科学, 2025, 52(6): 66-73.
JIANG Jun, GU Xiaoyang, XU Kunkun, LYU Yongshuai, HUANG Liangming. Design and Research of SIMD Programming Interface for Sunway[J]. Computer Science, 2025, 52(6): 66-73.
- Computer Science. 2025, 52 (6): 66-73. doi:10.11896/jsjkx.240700009
-
Abstract
In domestically produced Sunway high-performance systems, it is challenging for the Sunway GCC compiler to vectorize complex programs through methods such as automatic vectorization and inline assembly during compilation, which impedes the performance of Sunway processors. To address the issue of non-vectorizable programs, SIMD programming interfaces are researched and designed within the Sunway compiler. By adding vector machine modes and vector data types to the Sunway GCC compiler based on Sunway vector instructions, the compiler can recognize vector parameter types. Depending on the type and complexity of a vector instruction, different vector instructions are expanded using intrinsic functions, operator expansion, or high-level language expansion, thereby implementing the SIMD programming interface functions. Different instruction templates are added to the backend so that the appropriate templates can be matched, generating assembly code for the corresponding vector instructions. Testing and analysis of the FFTW and Hyperscan libraries show that, after vectorizing the programs with the SIMD programming interfaces, the average speedup for the FFTW library is 1.97 for the double type and 2.13 for the float type, while the average speedup for Hyperscan is 2.94.
-
Hybrid Quantum-classical Compressed Generative Adversarial Networks Based on Matrix Product Operators
张曜麟, 刘晓楠, 杜帅岐, 廉德萌. 基于矩阵乘积算符的混合量子压缩经典生成对抗网络[J]. 计算机科学, 2025, 52(6): 74-81.
ZHANG Yaolin, LIU Xiaonan, DU Shuaiqi, LIAN Demeng. Hybrid Quantum-classical Compressed Generative Adversarial Networks Based on Matrix Product Operators[J]. Computer Science, 2025, 52(6): 74-81.
- Computer Science. 2025, 52 (6): 74-81. doi:10.11896/jsjkx.240500017
-
Abstract
Neural networks play a pivotal role in artificial intelligence, particularly in image generation. As a popular algorithm in recent years, generative adversarial networks (GANs) have demonstrated superior performance in this area. Quantum computing, merging with traditional AI algorithms, accelerates processing and enhances data security, making it especially suitable for high-dimensional data and optimization problems. Within this context, hybrid quantum-classical GANs show promising results. However, these models face challenges in generating high-dimensional images, and the inclusion of linear layers in generators results in elevated parameter counts. Therefore, a hybrid quantum-classical compressed GAN model using matrix product operators is proposed. This model improves the structure of the block quantum generator, enabling the generation of multiple data blocks in a single call, which enhances efficiency. It integrates the nonlinear properties of classical networks with matrix product operators, ensuring high-quality image generation, speeding up model convergence, and reducing parameter counts. Experimental results show that the optimized generator structure increases total runtime by approximately 92.88%, reduces model parameters by about 5.59%, and surpasses traditional and hybrid quantum-classical models in convergence speed on the MNIST and FMNIST datasets, demonstrating its potential for high-dimensional image generation.
-
Pre-selection Optimization for Spill Heuristic on Shenwei Platform
蔡淳豪, 梁淑萍, 姜军, 邵宁远. 基于申威平台寄存器溢出策略的预选先验优化[J]. 计算机科学, 2025, 52(6): 82-87.
CAI Chunhao, LIANG Shuping, JIANG Jun, SHAO Ningyuan. Pre-selection Optimization for Spill Heuristic on Shenwei Platform[J]. Computer Science, 2025, 52(6): 82-87.
- Computer Science. 2025, 52 (6): 82-87. doi:10.11896/jsjkx.240800128
-
Abstract
The C2 just-in-time compiler implemented in the SWJDK allocates registers according to the graph coloring register allocation algorithm. The just-in-time compiler ignores the characteristics of the SW processor when allocating registers, which results in excessive memory access code. In order to get the most out of the SW processor, this paper proposes a compilation optimization algorithm. The optimization is based on the graph coloring register allocation algorithm, and the spill strategy is adjusted based on a priori assumptions about the characteristics of registers that carry special information on the SW server. The proposed algorithm has been implemented in the SWJDK, and its effect has been verified on the SPECjbb2015 and SPECjvm2008 benchmarks. After optimization, the max-jOPS of SPECjbb2015 increases by 4.20%, the critical-jOPS of SPECjbb2015 increases by 5.98%, and the SPECjvm2008 score increases by 2.02%.
-
Semi-supervised Learning Flow Field Prediction Method Based on Gaussian Mixture Discrimination
王枭, 李冠雄, 李娜, 袁东风. 基于高斯混合判别的半监督学习流场预测方法[J]. 计算机科学, 2025, 52(6): 88-95.
WANG Xiao, LI Guanxiong, LI Na, YUAN Dongfeng. Semi-supervised Learning Flow Field Prediction Method Based on Gaussian Mixture Discrimination[J]. Computer Science, 2025, 52(6): 88-95.
- Computer Science. 2025, 52 (6): 88-95. doi:10.11896/jsjkx.241100026
-
Abstract
Deep learning has garnered significant attention in aircraft design, particularly with the advancements driven by AI for Science. Data-driven methods based on neural networks have achieved remarkable success in airfoil flow field prediction. However, these methods often underperform when labeled data is limited. This paper proposes a semi-supervised learning (SSL) method named Semi-Flow for airfoil flow field prediction. Semi-Flow leverages the memorization properties of neural network loss to classify pseudo-labeled data into easy and hard subsets based on loss function values; this clustering is performed with a Gaussian mixture model (GMM). The loss function combines a data loss with auxiliary physical supervision, ensuring that the model's outputs conform to aerodynamic properties and data constraints. During data selection, the easy samples common to both models are chosen as training samples, thereby avoiding the impact of noisy samples. Training starts with several rounds of warm-up on the labeled samples, followed by the gradual inclusion of filtered easy samples. Experimental results demonstrate that Semi-Flow significantly outperforms models trained solely on limited labeled data, with an overall prediction performance improvement of nearly 30%. Ablation studies and qualitative results further validate the effectiveness of the proposed method. Semi-Flow exemplifies the potential of AI for Science, offering a promising approach to flow field prediction that reduces dependency on large amounts of labeled data.
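The loss-based easy/hard split can be illustrated with a small, generic sketch (a common pattern for loss-based sample selection; the variable names and threshold are assumptions, and the Semi-Flow pipeline additionally adds physics-based supervision and two-model agreement on top of this):

import numpy as np
from sklearn.mixture import GaussianMixture

def split_easy_hard(losses, threshold=0.5):
    """Split samples into 'easy'/'hard' by fitting a 2-component GMM to per-sample losses."""
    losses = np.asarray(losses, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    low_loss_comp = int(np.argmin(gmm.means_.ravel()))     # component with the smaller mean loss
    p_easy = gmm.predict_proba(losses)[:, low_loss_comp]   # probability of the low-loss mode
    return p_easy > threshold                               # boolean mask of "easy" samples

losses = np.concatenate([np.random.normal(0.1, 0.02, 80),   # well-fit (easy) pseudo-labels
                         np.random.normal(0.9, 0.10, 20)])  # noisy (hard) pseudo-labels
print(split_easy_hard(losses).sum(), "samples selected as easy")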
-
Survey of Transformer-based Time Series Forecasting Methods
陈嘉俊, 刘波, 林伟伟, 郑剑文, 谢家晨. 基于Transformer的时间序列预测方法综述[J]. 计算机科学, 2025, 52(6): 96-105.
CHEN Jiajun, LIU Bo, LIN Weiwei, ZHENG Jianwen, XIE Jiachen. Survey of Transformer-based Time Series Forecasting Methods[J]. Computer Science, 2025, 52(6): 96-105.
- Computer Science. 2025, 52 (6): 96-105. doi:10.11896/jsjkx.240500043
-
Abstract
Time series forecasting, a critical technique for analyzing historical data to predict future trends, has been widely applied in fields such as finance and meteorology. However, traditional methods such as the autoregressive moving average model and exponential smoothing face limitations when dealing with nonlinear patterns and capturing long-term dependencies. Recently, Transformer-based approaches, owing to their self-attention mechanism, have achieved breakthroughs in natural language processing and computer vision, and have also shown significant promise in time series forecasting. Therefore, exploring how to efficiently apply Transformers to time series prediction has become crucial for advancing this field. This paper first introduces the characteristics of time series data and explains the common task categories and evaluation metrics for time series forecasting. It then delves into the basic architecture of the Transformer model and selects Transformer-derived models for time series forecasting that have garnered widespread attention in recent years. These models are categorized by their modules and architectures, and are compared and analyzed from three perspectives: problem-solving capabilities, innovations, and limitations. Finally, this paper discusses potential future research directions for the application of Transformers in time series forecasting.
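The self-attention mechanism referred to throughout the surveyed models is the standard scaled dot-product attention:

\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

For a series of length T this produces a T \times T attention matrix, and the resulting O(T^2) time and memory cost is one of the main bottlenecks that many of the Transformer variants discussed in the survey aim to reduce.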
-
DCDAD:Differentiated Context Dependency for Time Series Anomaly Detection Method
廖思睿, 黄飞虎, 战鹏祥, 彭舰, 张凌浩. DCDAD:考虑上下文依赖差异化的时间序列异常检测模型[J]. 计算机科学, 2025, 52(6): 106-117.
LIAO Sirui, HUANG Feihu, ZHAN Pengxiang, PENG Jian, ZHANG Linghao. DCDAD:Differentiated Context Dependency for Time Series Anomaly Detection Method[J]. Computer Science, 2025, 52(6): 106-117.
- Computer Science. 2025, 52 (6): 106-117. doi:10.11896/jsjkx.240600001
-
Abstract
Time series anomaly detection aims to identify data points or segments in a time series that deviate from normal patterns. Effectively utilizing the contextual information in a time series to enhance detection accuracy plays a key role in constructing anomaly detection models. However, existing methods inadequately consider the differentiated context dependency in the data and lack explicit modeling of anomalous samples, resulting in poor discrimination between normal and anomalous samples and suboptimal detection performance. Therefore, this paper proposes a model that considers differentiated context dependency for time series anomaly detection (DCDAD), which enhances the learning of differentiated representations of context dependency. DCDAD captures temporal context dependency with self-attention mechanisms and learns hyperspheres for discriminating between normal and anomalous samples during clustering. By adopting the idea of anomaly injection, the dataset is augmented to address the issue of limited anomalous samples. Additionally, a targeted objective function for differentiated learning is designed to amplify the differences between normal and anomalous samples, thereby improving anomaly detection performance. Extensive experiments are conducted on five real-world time series datasets, and the results show an improvement of approximately 1.2% in F1 score over state-of-the-art algorithms, validating the effectiveness of learning context dependency in a differentiated manner. Furthermore, parameter sensitivity analysis and ablation experiments validate the stability and effectiveness of the proposed model.
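A minimal sketch of point-anomaly injection is given below as a generic data augmentation in the spirit of the abstract (the injection strategy, ratio, and magnitude are illustrative assumptions, not the DCDAD procedure itself):

import numpy as np

def inject_point_anomalies(window, ratio=0.05, scale=3.0, rng=None):
    """Return a copy of a 1-D window with a few spike anomalies injected.

    A small fraction of points is shifted by several standard deviations so that the
    model sees explicitly anomalous examples (with labels) during training.
    """
    rng = np.random.default_rng(rng)
    window = np.asarray(window, dtype=float).copy()
    n_anom = max(1, int(ratio * len(window)))
    idx = rng.choice(len(window), size=n_anom, replace=False)
    window[idx] += scale * window.std() * rng.choice([-1.0, 1.0], size=n_anom)
    labels = np.zeros(len(window), dtype=int)
    labels[idx] = 1
    return window, labels

x = np.sin(np.linspace(0, 6 * np.pi, 200))
x_aug, y_aug = inject_point_anomalies(x, ratio=0.03, rng=42)
print(y_aug.sum(), "points injected as anomalies")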
-
Dynamic Link Prediction Method for Adaptively Modeling Network Dynamics
郭翾, 侯锦霖, 王文俊, 焦鹏飞. 自适应建模网络动力学的动态链路预测方法[J]. 计算机科学, 2025, 52(6): 118-128.
GUO Xuan, HOU Jinlin, WANG Wenjun, JIAO Pengfei. Dynamic Link Prediction Method for Adaptively Modeling Network Dynamics[J]. Computer Science, 2025, 52(6): 118-128.
- Computer Science. 2025, 52 (6): 118-128. doi:10.11896/jsjkx.240400033
-
Abstract
Dynamic network link prediction is one of the core issues in understanding and analyzing dynamic networks. To address the challenges of capturing complex network structures and real evolution patterns faced by link prediction, this paper proposes a method that integrates graph neural networks and neural ordinary differential equations to adaptively model various network dynamics: the double-layer activity-constrained neural ordinary differential equation model (DANOM). DANOM integrates the importance and relative positional information of nodes to enhance the representation of network structures, strengthens the learning of evolution patterns through neural ordinary differential equation units constrained by node activity, and mines effective information of the network under a reconstruction loss over node activity and node representations. DANOM achieves optimal results on various downstream tasks across multiple real-world datasets. It achieves improvements of up to 14% and 9.7% in AUC and AP, respectively, on the single-step link prediction task. When snapshots are missing, the average AUC and AP of link prediction decrease by only 0.43% and 0.03%, respectively. In the user stitching experiments, DANOM achieves improvements of up to 20.6% and 24.4% in AUC and AP, respectively.
-
Outlier Detection Method Based on Adaptive Graph Autoencoder
谭淇尹, 于炯, 陈子歆. 基于自适应图自编码器的离群点检测方法[J]. 计算机科学, 2025, 52(6): 129-138.
TAN Qiyin, YU Jiong, CHEN Zixin. Outlier Detection Method Based on Adaptive Graph Autoencoder[J]. Computer Science, 2025, 52(6): 129-138.
- Computer Science. 2025, 52 (6): 129-138. doi:10.11896/jsjkx.240500092
-
Abstract
Outlier detection identifies the small number of individuals in a dataset that differ from the majority of samples, thereby providing insight into the overall health of the data and its abnormal information. Currently, for Euclidean structured datasets, most detection algorithms treat data instances as independent entities, overlooking the correlations between them. This informational bias hinders the effective identification of potential outliers that may exist within normal data regions. To address this issue, this paper proposes a deep joint representation learning algorithm named adaptive neighbor graph autoencoder (ANGAE). The algorithm constructs a graph, from the perspective of graph generation, to capture the relationships between data points, and leverages structural and attribute autoencoders to learn latent representations of the data. ANGAE introduces an adaptive neighbor graph construction mechanism to dynamically update the graph structure, ensuring that inaccurate graph structures are adjusted and improved during model training. By integrating structural embeddings and attribute embeddings, ANGAE facilitates effective interaction between network structure and node attributes. Experimental results demonstrate that the proposed method achieves superior performance across 11 datasets, maintaining high precision while exhibiting strong robustness, thereby substantiating the method's efficacy.
-
Online Capricious Data Stream Learning with Sparse Labels
张帅, 周鹏, 张燕平. 标签稀疏场景下任意数据流在线学习方法[J]. 计算机科学, 2025, 52(6): 139-150.
ZHANG Shuai, ZHOU Peng, ZHANG Yanping. Online Capricious Data Stream Learning with Sparse Labels[J]. Computer Science, 2025, 52(6): 139-150.
- Computer Science. 2025, 52 (6): 139-150. doi:10.11896/jsjkx.240300155
-
Abstract
With the dramatic increase in data volume, machine learning has gradually transitioned from traditional static learning to online learning modes designed for streaming data. Capricious data streams refer to data instances arriving sequentially over time, where the feature space can change arbitrarily: old features may disappear at any time while new features may emerge. For example, in environmental monitoring, the addition of new sensors or sudden anomalies in existing sensors can cause arbitrary changes in the feature space of the data stream. Furthermore, existing online learning methods for data streams often assume access to the true labels of all data instances; however, in real-world applications labels are often sparse due to the high cost of manual annotation. Therefore, to address online learning over capricious data streams with sparse labels, a passive-aggressive active learning algorithm called PAACDS (Passive Aggressive Active Learning for Capricious Data Streams), along with its variant PAACDS-I, is proposed. Firstly, an online active learning method is used to select valuable data instances, allowing superior prediction models to be built with minimal supervision. Subsequently, after the queried labels of the selected instances are obtained, the dynamic classifier, which covers the shared and newly added feature spaces of the capricious data stream, is updated using online passive-aggressive update rules and the principle of margin maximization. Finally, the proposed algorithm is compared with existing state-of-the-art methods on twelve datasets. Extensive experimental comparisons and analyses validate the effectiveness of the proposed algorithm in scenarios involving capricious data streams and sparse labels.
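For reference, the two ingredients named here, margin-based label querying and a passive-aggressive style update, can be sketched in a few lines (a generic illustration over a fixed feature space; the threshold, the PA-I variant, and the function names are assumptions, and PAACDS itself additionally handles features that appear and disappear):

import numpy as np

def pa_active_step(w, x, get_label, margin_threshold=0.8, C=1.0):
    """One step of margin-based active learning with a Passive-Aggressive (PA-I) update."""
    score = float(np.dot(w, x))
    if abs(score) >= margin_threshold:                  # confident prediction: skip the label query
        return w, False
    y = get_label(x)                                    # query the oracle only for informative instances
    loss = max(0.0, 1.0 - y * score)                    # hinge loss on the queried instance
    tau = min(C, loss / (float(np.dot(x, x)) + 1e-12))  # PA-I step size
    return w + tau * y * x, True

rng = np.random.default_rng(0)
w, queries = np.zeros(5), 0
oracle = lambda x: 1 if x[0] > 0 else -1                # hypothetical labeling oracle
for _ in range(500):
    x = rng.normal(size=5)
    w, asked = pa_active_step(w, x, oracle)
    queries += asked
print(queries, "labels queried out of 500 instances")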
-
Relationship Between Triadic Concept Reducts and Concept Reducts
李萱, 张琴, 魏玲. 三元概念约简与概念约简的关系[J]. 计算机科学, 2025, 52(6): 151-158.
LI Xuan, ZHANG Qin, WEI Ling. Relationship Between Triadic Concept Reducts and Concept Reducts[J]. Computer Science, 2025, 52(6): 151-158.
- Computer Science. 2025, 52 (6): 151-158. doi:10.11896/jsjkx.240600055
-
Abstract
Triadic concept analysis is an extension of formal concept analysis, and the ternary relation reflected in a triadic context corresponds to binary relations determined by the conditions, so there is a close connection between concept reduction, which keeps the binary relation unchanged, and triadic concept reduction, which keeps the ternary relation unchanged. On this basis, the relationship between concept reducts and triadic concept reducts is investigated. Firstly, the data under each condition in the triadic context is regarded as a formal context, and according to the relationship between formal concepts and triadic concepts, it is proved that the sets formed by the concept reducts of all condition-determined formal contexts can generate triadic concept consistent sets. Secondly, the set of concepts under each condition can be obtained from the set formed by triadic concepts under the same condition; it is further proved that the concept consistent sets of the formal contexts determined by all conditions can be generated from triadic concept reducts, and the equivalent proposition that triadic concept reducts and concept reducts can be generated from each other is given. Finally, the relationship between the three types of formal concepts and the three types of triadic concepts, which play different roles in concept reducts and triadic concept reducts, is discussed.
-
Semi-supervised Cross-modal Hashing Method for Semantic Alignment Networks Based on GAN
刘华咏, 朱婷. 基于GAN的语义对齐网络半监督跨模态哈希方法[J]. 计算机科学, 2025, 52(6): 159-166.
LIU Huayong, ZHU Ting. Semi-supervised Cross-modal Hashing Method for Semantic Alignment Networks Based on GAN[J]. Computer Science, 2025, 52(6): 159-166.
- Computer Science. 2025, 52 (6): 159-166. doi:10.11896/jsjkx.240400022
-
Abstract
Supervised methods have achieved substantial results in cross-modal retrieval and have become popular. However, these methods rely too heavily on labeled data and do not make full use of the rich information contained in unlabeled data. Unsupervised methods have been studied to address this problem, but when relying solely on unlabeled data the results are not ideal. Therefore, this paper proposes a semi-supervised cross-modal hashing method for semantic alignment networks based on GAN (GAN-SASCH). The model is built on a generative adversarial network that incorporates the concept of semantic alignment. The generative adversarial network is divided into two modules: the generator learns to fit the correlation distribution of the unlabeled data and generates spurious data samples, and the discriminator determines whether a data-pair sample comes from the dataset or from the generator. Through a minimax adversarial game between these two modules, the performance of the generative adversarial network is continuously improved. Semantic alignment makes full use of the interaction and symmetry between different modalities, unifies the similarity information of different modalities, and effectively guides the learning of hash codes. Adaptive learning optimization parameters are also introduced to improve the performance of the model. On the NUS-WIDE and MIRFLICKR25K datasets, the proposed method is compared with nine related state-of-the-art methods, and its effectiveness is verified using two evaluation indicators, MAP and PR curves.
-
Semantic-aware Heterogeneous Graph Attention Network Based on Multi-view Representation Learning
王静红, 吴芝冰, 王熙照, 李昊康. 基于多视图表示学习的语义感知异质图注意力网络[J]. 计算机科学, 2025, 52(6): 167-178.
WANG Jinghong, WU Zhibing, WANG Xizhao, LI Haokang. Semantic-aware Heterogeneous Graph Attention Network Based on Multi-view Representation Learning[J]. Computer Science, 2025, 52(6): 167-178.
- Computer Science. 2025, 52 (6): 167-178. doi:10.11896/jsjkx.240600032
-
Abstract
In recent years, graph neural networks have received widespread attention for their ability to efficiently process the complex structures and rich semantic information in heterogeneous graphs. Learning low-dimensional node embeddings of heterogeneous graphs while preserving the heterogeneous structure and semantics for downstream tasks such as node classification and node clustering is a critical and challenging problem. Existing studies mainly design models based on meta-paths, but this approach has at least two limitations. 1)Selecting suitable meta-paths usually requires expert knowledge or additional labelling information. 2)The approach restricts the model to learning from predefined patterns, which makes it difficult to adequately capture the complexity of the network. To address these issues, a multi-view and semantic-aware heterogeneous graph attention network (MS-HGANN) is proposed to merge nodes and relationships without manually designed meta-paths. MS-HGANN consists of three main components: feature mapping, second-order view-specific self-graph fusion, and semantic awareness. Feature mapping maps features into a uniform node feature space. Second-order view-specific self-graph fusion designs relationship-specific encoders and node attention to learn node representations over local structures. The semantic-awareness component designs two coordinated attention mechanisms to evaluate the importance of nodes and relationships and obtain the final node representations. Experimental results on three publicly available datasets show that the proposed model achieves state-of-the-art performance on node classification and clustering tasks.
-
Ship License Plate Recognition Network Based on Pyramid Transformer in Transformer
王腾, 冼允廷, 徐浩, 谢宋褀, 邹全义. 基于多层次嵌套Transformer的船名识别网络[J]. 计算机科学, 2025, 52(6): 179-186.
WANG Teng, XIAN Yunting, XU Hao, XIE Songqi, ZOU Quanyi. Ship License Plate Recognition Network Based on Pyramid Transformer in Transformer[J]. Computer Science, 2025, 52(6): 179-186.
- Computer Science. 2025, 52 (6): 179-186. doi:10.11896/jsjkx.240500064
-
Abstract
Ship identification is of great significance and is widely used in the regulation of waterborne targets. As one of the important components of ship identification, accurate recognition of ship names can make up for the shortcomings of traditional AIS identification methods and improve the accuracy of ship identification. Compared with conventional Chinese text recognition, ship name images suffer from low clarity, mutilated text, and inconsistent font styles due to the complex water environment, large changes in lighting, serious corrosion of ship hulls, and non-standardized ship names, which makes ship name recognition difficult and its accuracy low. This paper proposes a lightweight recognition network based on a Pyramid Transformer in Transformer to address these problems. Firstly, the input image is processed by a spatial transformer network to correct the tilt of the ship name. Then, the Transformer in Transformer module is used to efficiently extract multi-granularity features of the image. Finally, text and radicals are recognized at different scales. Experimental results show that the proposed algorithm performs excellently in ship name recognition compared with other text recognition methods. The accuracy reaches 92.68% on the CSLD dataset, 94.50% on the SCSLD dataset, and 66.34% on the DCSLD dataset. At the same time, the method has a low parameter count and a high frame rate.
-
PRNU Fingerprint Purification Algorithm for Open Environment
刘宇飞, 肖延辉, 田华伟. 面向开放环境的PRNU指纹提纯算法[J]. 计算机科学, 2025, 52(6): 187-199.
LIU Yufei, XIAO Yanhui, TIAN Huawei. PRNU Fingerprint Purification Algorithm for Open Environment[J]. Computer Science, 2025, 52(6): 187-199.
- Computer Science. 2025, 52 (6): 187-199. doi:10.11896/jsjkx.241100190
-
Abstract
Non-unique artifacts (NUAs) noise generated by the digital image post-processing pipeline is mixed into the unique and stable photo response non-uniformity (PRNU) fingerprint, seriously affecting the precision of the downstream source camera identification (SCI) task. However, existing NUAs suppression schemes mainly target experimental environments and require not only additional hyperparameter settings but also additional computing resources and storage space, which makes them difficult to apply in open environments. To solve this problem, this paper proposes a PRNU fingerprint purification algorithm for open environments. Firstly, it improves the existing PRNU fingerprint correlation metric, the peak-to-correlation energy ratio (PCE), and proposes the normalization-based PCE_norm and PCE_denuas to achieve adaptive correlation measurement in open environments. Then, NUAs are suppressed offline by constructing a contrastive learning mechanism that reduces the distance between instances of the same fingerprint and amplifies the distance between different fingerprints, so that no additional computation or storage cost is needed for online suppression in SCI tasks. Finally, experiments on the Dresden and Daxing datasets demonstrate the effectiveness and robustness of the proposed algorithm.
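For context, the standard PCE statistic that PCE_norm and PCE_denuas build on is usually defined over the normalized cross-correlation \rho(s) between the query residual and the reference fingerprint computed at every cyclic shift s:

\mathrm{PCE} = \frac{\rho(s_{\text{peak}})^{2}}{\dfrac{1}{|\mathcal{A}| - |\mathcal{N}|} \sum_{s \in \mathcal{A} \setminus \mathcal{N}} \rho(s)^{2}}

where s_peak is the location of the correlation peak, \mathcal{A} is the set of all shifts, and \mathcal{N} is a small neighborhood around the peak excluded from the background-energy estimate; the exact form of the normalized variants proposed in the paper is not given in the abstract.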
-
Cross-subject Driver Fatigue Detection Based on Local and Global Feature Integrated Network
龚子安, 顾正晖, 陈迪. 基于局部与全局特征集成网络的跨被试驾驶疲劳检测[J]. 计算机科学, 2025, 52(6): 200-210.
GONG Zian, GU Zhenghui, CHEN Di. Cross-subject Driver Fatigue Detection Based on Local and Global Feature Integrated Network[J]. Computer Science, 2025, 52(6): 200-210.
- Computer Science. 2025, 52 (6): 200-210. doi:10.11896/jsjkx.240300124
-
Abstract
Driver fatigue detection plays a crucial role in reducing traffic accidents. Electroencephalogram (EEG) signals, recognized as effective indicators that directly reflect a driver's mental state, are widely acknowledged as valuable for fatigue detection. However, the inherent high noise of EEG signals and their significant variability across individuals pose considerable challenges for cross-subject driver fatigue detection. To address these challenges, this paper proposes an integrated network based on local feature processing and global feature processing to extract features from EEG signals, aiming to overcome the issues in cross-subject fatigue detection. Applied to the SEED-VIG dataset for a cross-subject three-class detection task, the model achieves an accuracy of 61.34%, significantly surpassing baseline methods. To further enhance performance, transfer learning methods are employed and refined, resulting in a 13.35% increase in model accuracy on the cross-subject three-class detection task. Overall, this study demonstrates promising results in EEG-based cross-subject driver fatigue detection, offering new strategies for future studies in this direction.
-
Parameter Estimation of Intravoxel Incoherent Motion Based on Prior-driven
胡国栋, 叶晨. 基于先验驱动的体素内不相干运动的参数估计[J]. 计算机科学, 2025, 52(6): 211-218.
HU Guodong, YE Chen. Parameter Estimation of Intravoxel Incoherent Motion Based on Prior-driven[J]. Computer Science, 2025, 52(6): 211-218.
- Computer Science. 2025, 52 (6): 211-218. doi:10.11896/jsjkx.240300060
-
Abstract
The intravoxel incoherent motion (IVIM) model leverages diffusion-weighted magnetic resonance imaging (DWI) to non-invasively ascertain the diffusion coefficient of water molecules in living tissue (D) and to gather blood perfusion information (F, D*). However, conventional methods for estimating IVIM parameters are particularly susceptible to noise, which poses a significant challenge in abdominal organs such as the liver, where respiratory motion is prevalent; this sensitivity often compromises the efficacy of parameter estimation. To enhance robustness against noise, this study introduces a novel algorithm, the prior-driven neural network (PDNN). This approach harnesses prior knowledge derived from fully supervised training to inform and guide the unsupervised learning phase. The robustness of the PDNN model to noise is systematically assessed using root mean square errors (RMSE) across various signal-to-noise ratios. Additionally, the coefficient of variation (CV) distribution is employed to effectively differentiate between healthy and cirrhotic liver tissues, indicating significant differences (P<0.05) that underscore the model's diagnostic capability. The performance of the PDNN algorithm is compared with other advanced methods, including the nonlinear least squares approach, the voxel-based deep learning method IVIM-NEToptim, and SSUN, a 2D convolutional network grounded in domain-specific information. The results demonstrate that PDNN outperforms these methods in terms of noise robustness. Specifically, the RMSE values for the fitted parameters [D, F, D*] are 27.63%, 23.72%, and 31.46% lower, respectively, than those of the second-best method. Moreover, PDNN not only preserves the integrity of tissue structure information but also effectively distinguishes between healthy and cirrhotic livers, highlighting its potential as a superior tool for clinical diagnosis and evaluation.
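The underlying signal model is the standard bi-exponential IVIM equation relating the diffusion-weighted signal S(b) at b-value b to the parameters being estimated:

\frac{S(b)}{S(0)} = F\, e^{-b D^{*}} + (1 - F)\, e^{-b D}

where D is the tissue diffusion coefficient, D^{*} the pseudo-diffusion coefficient associated with perfusion, and F the perfusion fraction; fitting [D, F, D^{*}] voxel-wise from noisy multi-b-value data is the estimation problem that PDNN addresses.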
-
Depression Recognition Based on Speech Corpus Alignment and Adaptive Fusion
沈心旸, 王善敏, 孙玉宝. 基于语音语料对齐与自适应融合的抑郁症识别[J]. 计算机科学, 2025, 52(6): 219-227.
SHEN Xinyang, WANG Shanmin, SUN Yubao. Depression Recognition Based on Speech Corpus Alignment and Adaptive Fusion[J]. Computer Science, 2025, 52(6): 219-227.
- Computer Science. 2025, 52 (6): 219-227. doi:10.11896/jsjkx.240400150
-
Abstract
Depression has become a significant global public health issue. Speech-based depression recognition aims to recognize depression in an easily scalable and cost-effective manner. Prior studies often divide long speech into multiple slices and either optimize models with them independently or further establish their relationship via temporal modules. They fail to make the most of the intra- and inter-relationships between speech segments, while introducing some task-irrelevant information. This paper proposes a depression recognition method based on speech corpus alignment and adaptive fusion. After segmenting the input speech, multi-granularity feature correlations are established through a multi-head attention mechanism, and a segment importance mining module is used to automatically learn the importance of different segments. The method effectively integrates local and global features, significantly improving recognition performance. The proposed method achieves a weighted accuracy of 82.59%, an unweighted accuracy of 82.17%, and an F1 score of 82.23% on the MODMA database. On the SEARCH database, the weighted accuracy, unweighted accuracy, and F1 score are 74.44%, 68.33%, and 69.25%, respectively. The experiments demonstrate that the proposed model can accurately recognize depression, outperforming existing works.
-
Research on Depth Image Super-resolution Algorithm for High and Low Frequency Feature Modulation Fusion Guided by Color Images
徐晗智, 李嘉莹, 梁宇栋, 魏巍. 彩色图像引导高低频特征调制融合的深度图像超分辨率算法研究[J]. 计算机科学, 2025, 52(6): 228-238.
XU Hanzhi, LI Jiaying, LIANG Yudong, WEI Wei. Research on Depth Image Super-resolution Algorithm for High and Low Frequency Feature Modulation Fusion Guided by Color Images[J]. Computer Science, 2025, 52(6): 228-238.
- Computer Science. 2025, 52 (6): 228-238. doi:10.11896/jsjkx.241200092
-
Abstract
Depth images effectively describe the information of a 3D scene. However, the acquisition equipment and imaging environment limit the resolution and high-frequency information of the depth images acquired by depth sensors, so it is imperative to improve their resolution. Some depth map super-resolution algorithms have significantly improved performance by introducing RGB images of the same scene to provide guidance for the super-resolution process. The key challenge lies in effectively leveraging the RGB information to guide depth map super-resolution reconstruction while addressing the modal inconsistency between the depth map and the RGB image. Existing methods primarily focus on high-frequency information, overlooking the low-frequency global information that is also crucial for performance. To address these limitations, this paper proposes a color image-guided, high and low frequency feature modulation and fusion super-resolution reconstruction algorithm for depth maps. A two-branch feature extraction module extracts high and low frequency features from the color and depth images, respectively; in each branch, a CNN and a Transformer extract local high frequency and global low frequency information. A bidirectional modulation module realizes the bidirectional transformation and fusion between the high and low frequency information of the color and depth images. The model fully exploits the complementary information between the depth image and the color image through bidirectional modulation across different modalities and frequencies, followed by the fusion of high and low frequency information, so that the color-guided depth super-resolution algorithm achieves better reconstruction results. Lossless information compression with a reversible neural network (INN) extracts high-frequency detail more effectively, and a quadtree attention mechanism reduces the computational complexity of the Transformer in extracting global information, improving the efficiency of the algorithm. Experimental results on public datasets show that the proposed method outperforms the comparison methods both quantitatively and qualitatively, achieving better subjective visual results.
-
Multi-scale Feature Fusion Residual Denoising Network Based on Cascade
郭业才, 胡晓伟, 毛湘南. 基于级联的多尺度特征融合残差去噪网络[J]. 计算机科学, 2025, 52(6): 239-246.
GUO Yecai, HU Xiaowei, MAO Xiangnan. Multi-scale Feature Fusion Residual Denoising Network Based on Cascade[J]. Computer Science, 2025, 52(6): 239-246.
- Computer Science. 2025, 52 (6): 239-246. doi:10.11896/jsjkx.240300058
-
Abstract
To address the problems that feature extraction in image denoising is overly homogeneous and feature utilization is low, so that clearer images cannot be generated, a cascaded multi-scale feature fusion residual network for real image denoising is proposed. The network's dual-branch adaptive dense residual block uses dual-path asymmetric dilated convolution to expand the receptive field and selectively extract rich texture features at the horizontal scale. In the multi-scale spatial U-Net module, a multi-scale spatial fusion block is used to enhance the network's ability to learn the overall image structure, learn information at different levels, and acquire multi-level features based on image spatial and contextual information. Skip connections facilitate parameter sharing among structures, fully integrating features at different scales and ensuring the integrity of information. Finally, dual residual learning is used to generate clear denoised images. Results show that the peak signal-to-noise ratio of the proposed algorithm on the real noise datasets DND and SIDD is 39.68 dB and 39.50 dB, respectively, and the structural similarity is 0.953 and 0.957, respectively, better than mainstream denoising algorithms. The proposed algorithm enhances denoising performance while retaining more detail, further improving image quality.
-
Oriented Object Detection Based on Multi-scale Perceptual Enhancement
张达斌, 吴秦, 周浩杰. 基于多尺度感知增强的旋转目标检测[J]. 计算机科学, 2025, 52(6): 247-255.
ZHANG Dabin, WU Qin, ZHOU Haojie. Oriented Object Detection Based on Multi-scale Perceptual Enhancement[J]. Computer Science, 2025, 52(6): 247-255.
- Computer Science. 2025, 52 (6): 247-255. doi:10.11896/jsjkx.240300076
-
Abstract
Oriented object detection in remote sensing images is challenging due to complex backgrounds, dense distributions of arbitrarily oriented objects, large scale variations, and high aspect ratios. To address these issues, this paper proposes a framework for oriented object detection in remote sensing images based on multi-scale perceptual enhancement. Firstly, a multi-scale perceptual enhancement module is proposed in the feature extraction stage, which employs different convolutional blocks to extract features for different levels of feature maps, ensuring that low-level feature maps retain enough detail and high-level feature maps extract enough semantic information, so that the extracted multi-level feature maps can adaptively learn features at different scales. Meanwhile, an adaptive channel attention module adaptively learns channel weights to mitigate the effect of complex backgrounds. Secondly, a size-sensitive rotated Intersection over Union (IoU) loss is proposed, which adds loss terms for the objects' aspect ratio and area, supervising the network to learn the size information of targets and increasing sensitivity to high-aspect-ratio targets. The proposed method achieves 77.64%, 98.32%, and 66.14% mAP on the publicly available remote sensing image datasets DOTA, HRSC2016, and DIOR-R, respectively, outperforming existing state-of-the-art remote sensing image detection networks.
-
Saliency Mask Mixup for Few-shot Image Classification
陈亚当, 高宇轩, 卢楚翰, 车洵. 基于显著性掩模混合的小样本图像分类[J]. 计算机科学, 2025, 52(6): 256-263.
CHEN Yadang, GAO Yuxuan, LU Chuhan, CHE Xun. Saliency Mask Mixup for Few-shot Image Classification[J]. Computer Science, 2025, 52(6): 256-263.
- Computer Science. 2025, 52 (6): 256-263. doi:10.11896/jsjkx.240600123
-
Abstract
Few-shot image classification addresses the poor performance of traditional image classification when data is scarce. The challenge lies in effectively utilizing sparse labeled samples to predict the true feature distribution. To tackle this, some recent methods adopt data augmentation techniques such as random masking or mixed interpolation to enhance the diversity and generalization of labeled samples. However, the following issues remain: 1)due to the uncertainty of random masking, the foreground may be either completely masked or fully exposed, leading to the loss of crucial information in samples; 2)because the data distribution after mixed interpolation tends to be overly uniform, models find it difficult to accurately distinguish differences between classes and thus fail to effectively delineate the boundaries between categories. To address these problems, this paper proposes a data augmentation method based on Saliency Mask Mixup. Firstly, the Mask Mix (M-Mix) and Confident Clip Selector (CCS) components adaptively select and retain the key feature information in images. Secondly, Saliency Fuse (SF) computes the importance of various regions in the image to guide image fusion, making the resulting images more diverse and rich and the category boundaries clearer. The proposed method demonstrates outstanding performance on multiple standard few-shot image classification datasets (such as miniImageNet, tieredImageNet, Few-shot CIFAR100, and Caltech-UCSD Birds-200), outperforming state-of-the-art methods by approximately 0.2%~1%. These results indicate the significant potential and advantages of the proposed method in few-shot image classification.
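A generic saliency-guided mixup can be sketched as follows (a simplified illustration of the general idea; the gradient-magnitude saliency proxy, the mask construction, and the label weighting are assumptions and do not reproduce the paper's M-Mix, CCS, or SF modules):

import numpy as np

def saliency_map(img):
    """Crude saliency proxy: gradient magnitude of the grayscale image, scaled to [0, 1]."""
    gray = img.mean(axis=-1) if img.ndim == 3 else img
    gy, gx = np.gradient(gray.astype(float))
    sal = np.hypot(gx, gy)
    return sal / (sal.max() + 1e-12)

def saliency_mask_mixup(x_a, y_a, x_b, y_b, temperature=1.0):
    """Mix two images with a per-pixel mask derived from the saliency of x_a.

    The mixed label is weighted by the average mask value, so the label reflects
    how much of each image survives in the mixture.
    """
    mask = saliency_map(x_a) ** temperature
    mask = mask[..., None] if x_a.ndim == 3 else mask
    x_mix = mask * x_a + (1.0 - mask) * x_b
    lam = float(mask.mean())
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix

x_a, x_b = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y_a, y_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # one-hot labels for two classes
x_mix, y_mix = saliency_mask_mixup(x_a, y_a, x_b, y_b)
print(x_mix.shape, y_mix)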
-
Two-stage Left Atrial Scar Segmentation Based on Multi-scale Attention and Uncertainty Loss
张鑫艳, 唐振超, 李一夫, 刘振宇. 基于多尺度注意力和不确定性损失的两阶段左心房疤痕分割[J]. 计算机科学, 2025, 52(6): 264-273.
ZHANG Xinyan, TANG Zhenchao, LI Yifu, LIU Zhenyu. Two-stage Left Atrial Scar Segmentation Based on Multi-scale Attention and Uncertainty Loss[J]. Computer Science, 2025, 52(6): 264-273.
- Computer Science. 2025, 52 (6): 264-273. doi:10.11896/jsjkx.241200197
-
Abstract
Atrial fibrillation (AF) is one of the most common clinical arrhythmias. Accurate segmentation and area assessment of the left atrium and its post-infarction scar region are of great clinical significance for early diagnosis, treatment planning, and prognosis assessment of AF in patients with myocardial infarction. Deep learning-based methods are the mainstream direction for automatic segmentation of the left atrium and post-infarction scars. However, because post-infarction scars are small and easily affected by the surrounding enhanced tissue, segmentation accuracy still needs improvement. Therefore, a two-stage deep learning model based on multi-scale attention and uncertainty loss is proposed. On the one hand, a Multi-Scale Attention Module (MSAM) is introduced before upsampling in the network; this module encodes rich multi-scale semantic information and makes the model pay more attention to important semantic and spatial information. On the other hand, an uncertainty loss is introduced to enhance the model's ability to model scar uncertainty. In addition, this study uses histogram matching (HM) to enhance image quality and improve the segmentation ability of the network. The proposed method is verified on the validation set and the left atrial and scar quantification and segmentation (LAScarQS++) evaluation platform. The experimental results show that the scars segmented by this method are more complete and the segmentation accuracy is improved; compared with nnU-Net, the Dice coefficient of post-infarction scar segmentation increases by 8.12%.
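The histogram matching (HM) preprocessing step can be reproduced with an off-the-shelf routine; the synthetic slices below are placeholders for real LGE-MRI data (a minimal sketch, not the paper's full preprocessing pipeline):

import numpy as np
from skimage.exposure import match_histograms

# Hypothetical slices: `moving` is a low-contrast slice, `reference` a well-contrasted
# slice from the training set used as the intensity template.
rng = np.random.default_rng(0)
moving = rng.normal(loc=80, scale=15, size=(192, 192)).clip(0, 255)
reference = rng.normal(loc=120, scale=40, size=(192, 192)).clip(0, 255)

# match_histograms remaps the intensity distribution of `moving` onto `reference`,
# which is the kind of preprocessing the abstract applies before segmentation.
matched = match_histograms(moving, reference)
print(matched.mean(), reference.mean())   # the matched slice now follows the reference statistics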
-
FDiff-Fusion: Medical Image Diffusion Fusion Network Segmentation Model Driven by Fuzzy Logic
耿胜, 丁卫平, 鞠恒荣, 黄嘉爽, 姜舒, 王海鹏. FDiff-Fusion:基于模糊逻辑驱动的医学图像扩散融合网络分割模型[J]. 计算机科学, 2025, 52(6): 274-285.
GENG Sheng, DING Weiping, JU Hengrong, HUANG Jiashuang, JIANG Shu, WANG Haipeng. FDiff-Fusion: Medical Image Diffusion Fusion Network Segmentation Model Driven by Fuzzy Logic[J]. Computer Science, 2025, 52(6): 274-285. - GENG Sheng, DING Weiping, JU Hengrong, HUANG Jiashuang, JIANG Shu, WANG Haipeng
- Computer Science. 2025, 52 (6): 274-285. doi:10.11896/jsjkx.240600006
-
Abstract
PDF(3453KB) ( 70 )
- References | Related Articles | Metrics
-
Medical image segmentation has important application value in clinical diagnosis, treatment, and pathological analysis. In recent years, denoising diffusion models have achieved remarkable success in image segmentation modeling, as they can better capture complex structures and detail information in images. However, most methods that use denoising diffusion models for medical image segmentation ignore the boundary uncertainty and region ambiguity of the segmentation target, which makes the final segmentation results unstable and inaccurate. To solve this problem, a medical image diffusion fusion network segmentation model driven by fuzzy logic (FDiff-Fusion) is proposed. By integrating the denoising diffusion model into the classical U-Net network, the model can effectively extract rich semantic information from the input medical images. Since boundary uncertainty and region ambiguity are common in medical image segmentation, a fuzzy learning module is designed on the skip connections of the U-Net network. The module applies several fuzzy membership functions to the encoded input features to describe the degree of similarity between feature points, and applies fuzzy rules to these membership functions, thus enhancing the model's ability to model uncertain boundaries and ambiguous regions. In addition, to improve the accuracy and robustness of the segmentation results, a method based on iterative attention feature fusion is introduced in the test phase, which adds local context information to the global context information in the attention module to fuse the prediction results of each denoising time step. Experimental results show that, compared with existing advanced segmentation networks, the average Dice score and average HD95 distance obtained by FDiff-Fusion on the BRATS 2020 brain tumor dataset are 84.16% and 2.473 mm, respectively, and those obtained on the BTCV abdominal multi-organ dataset are 83.41% and 7.98 mm, respectively, showing good segmentation performance.
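A toy sketch of the fuzzy-membership idea on skip-connection features is given below; the Gaussian membership functions, their number, and the averaging rule are illustrative assumptions, not the paper's exact fuzzy learning module.

```python
import torch
import torch.nn as nn

class FuzzyLearningBlock(nn.Module):
    """Toy fuzzy-learning block (an illustration, not the paper's module).

    Each feature value is mapped through K Gaussian membership functions with
    learnable centers and widths; the memberships are averaged (a simple fuzzy
    rule aggregation) and used to gate the skip-connection features.
    """
    def __init__(self, num_memberships=4):
        super().__init__()
        self.centers = nn.Parameter(
            torch.linspace(-1, 1, num_memberships).view(1, 1, -1, 1, 1)
        )
        self.log_sigma = nn.Parameter(torch.zeros(1, 1, num_memberships, 1, 1))

    def forward(self, x):                        # x: (B, C, H, W)
        x5 = x.unsqueeze(2)                      # (B, C, 1, H, W)
        sigma = self.log_sigma.exp()
        membership = torch.exp(-((x5 - self.centers) ** 2) / (2 * sigma ** 2))
        gate = membership.mean(dim=2)            # aggregate the K membership degrees
        return x * gate

feat = torch.randn(2, 16, 32, 32)
gated = FuzzyLearningBlock()(feat)               # same shape as feat
```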
-
Few-shot Insulator Defect Detection Based on Local and Global Feature Representation
崔克彬, 胡真真. 基于局部和全局特征表示的小样本绝缘子缺陷检测[J]. 计算机科学, 2025, 52(6): 286-296.
CUI Kebin, HU Zhenzhen. Few-shot Insulator Defect Detection Based on Local and Global Feature Representation[J]. Computer Science, 2025, 52(6): 286-296. - CUI Kebin, HU Zhenzhen
- Computer Science. 2025, 52 (6): 286-296. doi:10.11896/jsjkx.240300146
-
Abstract
PDF(4436KB) ( 56 )
- References | Related Articles | Metrics
-
To address the low accuracy of current insulator defect detection caused by the small number of insulator defect samples and the small size of defect targets, this paper proposes a few-shot object detection model (C-TFSIDD) that combines CNN and Transformer and fuses local and global image features to detect insulator defects more effectively. Firstly, Next-ViT, which integrates the local detail-capturing capability of CNNs and the global information-integration capability of Transformers, is used as the feature extraction module to accurately capture local and global feature information of insulator images. Secondly, an improved path aggregation feature pyramid network (PAFPN) is used for bidirectional multi-scale feature fusion to enhance the low-level feature representation and improve the detection of small targets. Finally, a metric-based discriminative loss is proposed to optimize the classifier in the fine-tuning stage so that it learns more discriminative feature representations, increasing inter-class separability and reducing the effect of intra-class variation. The model is trained and evaluated on two public insulator defect datasets. Experimental results show that C-TFSIDD improves the detection results in the 5-shot, 10-shot, and 20-shot settings by 28.7%, 35.5%, and 47.7%, respectively, compared with the baseline model TFA, and by 21.8%, 26.7%, and 21.1%, respectively, compared with the few-shot object detection model FSCE. The results show that C-TFSIDD can effectively improve defect detection accuracy for few-shot insulator samples.
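A metric-based discriminative loss of the kind described can be sketched as a prototype pull/push objective; the formulation, margin, and names below are illustrative assumptions, not the paper's exact loss.

```python
import torch

def discriminative_metric_loss(features, labels, margin=1.0):
    """Illustrative metric loss (not the paper's exact formulation).

    The pull term draws each embedding toward its class prototype; the push
    term forces prototypes of different classes to be at least `margin` apart.
    Assumes the batch contains at least two classes for the push term.
    """
    classes = labels.unique()                                   # sorted class ids
    protos = torch.stack([features[labels == c].mean(0) for c in classes])
    idx = torch.searchsorted(classes, labels)                   # sample -> prototype index
    pull = (features - protos[idx]).pow(2).sum(1).mean()

    if len(classes) < 2:
        return pull
    dists = torch.cdist(protos, protos)                         # pairwise prototype distances
    off_diag = dists[~torch.eye(len(classes), dtype=torch.bool)]
    push = torch.clamp(margin - off_diag, min=0).pow(2).mean()
    return pull + push

# toy usage: 8 embeddings of dimension 16 from 3 classes
feats = torch.randn(8, 16)
labs = torch.tensor([0, 0, 1, 1, 1, 2, 2, 0])
loss = discriminative_metric_loss(feats, labs)
```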
-
Edge and Color Information Guided High-resolution Low-light Image Enhancement Algorithm
张玲, 李振宇. 边缘和颜色信息引导下的高分辨率低光图像增强算法[J]. 计算机科学, 2025, 52(6): 297-305.
ZHANG Ling, LI Zhenyu. Edge and Color Information Guided High-resolution Low-light Image Enhancement Algorithm[J]. Computer Science, 2025, 52(6): 297-305. - ZHANG Ling, LI Zhenyu
- Computer Science. 2025, 52 (6): 297-305. doi:10.11896/jsjkx.240300004
-
Abstract
PDF(5720KB) ( 80 )
- References | Related Articles | Metrics
-
The ability of modern devices to capture high-resolution images poses a new challenge for image processing: most existing low-light image enhancement algorithms are designed for low-resolution images and suffer from unclear details and color distortion when handling high-resolution images. Using the texture and color information contained in the image itself, an edge and color information guided high-resolution low-light image enhancement algorithm is proposed. To overcome the limitation of local feature learning in convolutional neural networks, an edge decoder is introduced, which helps capture long-range key information in the image and improves the encoding of semantic information at boundaries. In addition, to handle high-resolution images, a sparse attention mechanism is introduced into the context attention blocks, which focuses on the important information in the image and effectively reduces noise interference. On the other hand, the color decoder effectively exploits the chromaticity cues of the low-light image itself to improve the accuracy of color information recovery.
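Sparse attention can take several forms; a minimal top-k variant is sketched below to illustrate how attending only to the highest-scoring keys suppresses low-relevance (often noisy) positions. The top-k rule and parameters are assumptions for illustration, not the paper's exact block.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, keep=16):
    """Minimal top-k sparse attention sketch: each query attends only to its
    `keep` highest-scoring keys; the rest are masked out before the softmax."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)    # (B, Nq, Nk)
    keep = min(keep, scores.shape[-1])
    topv, _ = scores.topk(keep, dim=-1)
    threshold = topv[..., -1:]                                  # k-th largest score per query
    scores = scores.masked_fill(scores < threshold, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = torch.randn(1, 64, 32)
k = torch.randn(1, 256, 32)
v = torch.randn(1, 256, 32)
out = topk_sparse_attention(q, k, v)     # (1, 64, 32)
```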
-
Multi-AGV Path Planning Algorithm Based on Improved DDPG
赵学健, 叶昊, 李豪, 孙知信. 基于改进DDPG的多AGV路径规划算法[J]. 计算机科学, 2025, 52(6): 306-315.
ZHAO Xuejian, YE Hao, LI Hao, SUN Zhixin. Multi-AGV Path Planning Algorithm Based on Improved DDPG[J]. Computer Science, 2025, 52(6): 306-315. - ZHAO Xuejian, YE Hao, LI Hao, SUN Zhixin
- Computer Science. 2025, 52 (6): 306-315. doi:10.11896/jsjkx.240500099
-
Abstract
PDF(3797KB) ( 72 )
- References | Related Articles | Metrics
-
In the field of intelligent logistics, path planning and obstacle avoidance for automated guided vehicles (AGVs) is a significant challenge. Traditional deep reinforcement learning (DRL) methods exhibit limitations in efficiency, dynamic adaptability, and handling competitive-cooperative interactions among multiple AGVs. This paper presents the improved adaptive cooperative deep deterministic policy gradient (Improved-AC-DDPG) algorithm, an advancement over the standard DDPG. It leverages environmental data to construct state vectors and employs a real-time path planning strategy that dynamically creates task sequences to prevent AGV conflicts. The algorithm also includes continuous policy parameter optimization for obstacle avoidance. Experiments show that Improved-AC-DDPG surpasses both the standard DDPG and the artificial potential field optimized DDPG (APF-DDPG) in convergence speed, obstacle avoidance, path planning, and energy efficiency, thus enhancing multi-AGV system performance. This study provides innovative insights and solutions for multi-agent system modeling and collaboration in dynamic environments, with substantial theoretical and practical implications.
-
Efficient Remote Sensing Common Product Production Algorithm Based on Product Reuse Model
左宪禹, 周小虎, 周黎明, 谢毅, 刘成. 一种基于产品复用模型的高效遥感共性产品生产算法[J]. 计算机科学, 2025, 52(6): 316-323.
ZUO Xianyu, ZHOU Xiaohu, ZHOU Liming, XIE Yi, LIU Cheng. Efficient Remote Sensing Common Product Production Algorithm Based on Product Reuse Model[J]. Computer Science, 2025, 52(6): 316-323. - ZUO Xianyu, ZHOU Xiaohu, ZHOU Liming, XIE Yi, LIU Cheng
- Computer Science. 2025, 52 (6): 316-323. doi:10.11896/jsjkx.240300019
-
Abstract
PDF(1979KB) ( 53 )
- References | Related Articles | Metrics
-
With the increasing demand for remote sensing common products in various industries, high-performance remote sensing product production systems are being applied more widely. As a key component of such systems, an excellent task scheduling algorithm can significantly improve production efficiency. However, the production of remote sensing common products poses unique challenges: when a large number of workflows are submitted for production in a short time, duplicated computation and data processing occur among these workflows, and because generating common products typically requires large amounts of data and long processing times, this easily leads to wasted resources and reduced production efficiency. To solve this problem, this paper proposes a task division strategy based on a product reuse model, which focuses on optimizing workflow processing. Firstly, the workflows submitted by users are packaged into process packages according to task repetition, and processes with repeated tasks are assigned to the same computing node to reduce data transmission time between nodes. Then, a product reuse model is introduced that allows different processing workflows to reuse previously obtained product results, reducing repeated computation and data processing, thereby improving production efficiency and meeting the efficiency requirements of common product production. To verify the effectiveness of the proposed algorithm, it is compared with the traditional FCFS and SJF algorithms in the CloudSim simulator. The results show that the proposed scheduling algorithm achieves significantly lower total task completion time and average task response time than the other two algorithms, showing better performance.
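The core of the product reuse model is that a processing result produced once should not be recomputed by later workflows. A toy sketch of this caching idea is given below; the class, keys, and operator names are hypothetical illustrations, not the paper's scheduling algorithm.

```python
class ProductReuseCache:
    """Toy sketch of the product-reuse idea: each processing step is keyed by
    (operator, input product); a result produced once is cached and reused by
    later workflows, so duplicated steps are skipped."""

    def __init__(self):
        self.cache = {}                          # (operator, product_id) -> result product

    def run_step(self, operator, product_id, compute):
        key = (operator, product_id)
        if key not in self.cache:                # first request pays the computation cost
            self.cache[key] = compute(product_id)
        return self.cache[key]                   # later workflows reuse it for free

reuse = ProductReuseCache()
radiometric = lambda pid: f"calibrated({pid})"   # stand-in for a real processing operator
a = reuse.run_step("radiometric_calibration", "scene_001", radiometric)
b = reuse.run_step("radiometric_calibration", "scene_001", radiometric)   # reused, not recomputed
assert a is b
```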
-
Research on Hybrid Retrieval-augmented Dual-tower Model
郜洪奎, 马瑞祥, 包骐豪, 夏少杰, 瞿崇晓. 基于混合检索增强的双塔模型研究[J]. 计算机科学, 2025, 52(6): 324-329.
GAO Hongkui, MA Ruixiang, BAO Qihao, XIA Shaojie, QU Chongxiao. Research on Hybrid Retrieval-augmented Dual-tower Model[J]. Computer Science, 2025, 52(6): 324-329. - GAO Hongkui, MA Ruixiang, BAO Qihao, XIA Shaojie, QU Chongxiao
- Computer Science. 2025, 52 (6): 324-329. doi:10.11896/jsjkx.240800017
-
Abstract
PDF(2085KB) ( 55 )
- References | Related Articles | Metrics
-
At the forefront of knowledge retrieval, particularly in scenarios involving large language models (LLMs), research emphasis has shifted toward pure vector retrieval techniques for efficiently capturing pertinent information, which is then fed into large language models for comprehensive distillation and summarization. However, this approach is limited: vector representations alone may fail to fully capture the intricacies of retrieval, and the absence of effective ranking mechanisms often leads to an overabundance of irrelevant information, diluting the alignment between the final response and the user's actual needs. To address this problem, this paper introduces a hybrid retrieval-augmented dual-tower model. The model innovatively integrates a multi-path recall strategy, ensuring that retrieval results are both comprehensive and highly relevant through complementary recall mechanisms. Architecturally, it adopts a dual-layer structure combining bidirectional recurrent neural networks with text convolutional neural networks, which allows the model to perform multi-level ranking optimization on retrieval results and significantly improves the relevance and precision of top-ranked outcomes. The efficiently ranked, high-quality information is then integrated with the original query and fed into a large language model, exploiting its deep analytical capabilities to generate more accurate and credible responses. Experimental findings confirm that the proposed method effectively improves retrieval accuracy and overall system performance, markedly enhancing the precision and practicality of large language models in real-world applications.
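Multi-path recall merges candidates from complementary retrievers before reranking. One common way to do this, shown below as a minimal sketch, is reciprocal rank fusion; the paper's dual-tower ranker is a learned model, so this is only an illustration of the recall-fusion stage.

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists (e.g., keyword recall and vector recall)
    using reciprocal-rank scores, so documents favoured by complementary paths
    rise to the top before any downstream reranking or LLM consumption."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_recall = ["d3", "d1", "d7", "d2"]
vector_recall = ["d1", "d5", "d3", "d9"]
print(reciprocal_rank_fusion([keyword_recall, vector_recall]))   # d1 and d3 ranked first
```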
-
Study on Text Component Recognition of Narrative Texts Based on Prompt Learning
王晓艺, 王炯, 刘杰, 周建设. 基于提示学习的记叙文篇章成分识别研究[J]. 计算机科学, 2025, 52(6): 330-335.
WANG Xiaoyi, WANG Jiong, LIU Jie, ZHOU Jianshe. Study on Text Component Recognition of Narrative Texts Based on Prompt Learning[J]. Computer Science, 2025, 52(6): 330-335. - WANG Xiaoyi, WANG Jiong, LIU Jie, ZHOU Jianshe
- Computer Science. 2025, 52 (6): 330-335. doi:10.11896/jsjkx.240400043
-
Abstract
PDF(2009KB) ( 51 )
- References | Related Articles | Metrics
-
Text structure analysis is one of the key techniques in automated essay scoring and an important research topic in natural language processing. In recent years, research on essay structure analysis has been scarce and has mainly focused on argumentative essays; the study of narrative texts remains insufficient, especially in terms of research methods and resources, which are relatively limited. To address these issues, this paper constructs a corpus for identifying the components of narrative texts written by primary and secondary school students. An automatic corpus annotation model based on BERT-BiLSTM is used to improve annotation efficiency, and statistical analysis is conducted on the content distribution and consistency of the corpus annotation. This paper then proposes a narrative text component recognition method based on prompt learning, which automatically constructs prefix prompt templates for recognizing text components and uses a hierarchical attention mechanism to learn richer text features, thereby improving the ability to recognize narrative text structure. Experiments on the self-built dataset show that the proposed method improves the accuracy of narrative discourse structure recognition to 85.80%, outperforming the pre-trained language models used for comparison.
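A prefix prompt template for component recognition can be as simple as prepending a cloze-style sentence to each paragraph and letting a masked language model fill in the component label. The template wording and the candidate labels below are hypothetical examples, not the templates the paper constructs automatically.

```python
COMPONENT_LABELS = ["开端", "发展", "高潮", "结局"]   # beginning / development / climax / ending

def build_prefix_prompt(paragraph: str, mask_token: str = "[MASK]") -> str:
    """Toy prefix-prompt construction: a masked-LM classifier would be asked to
    fill the mask with one of the component labels above."""
    return f"该段落属于记叙文的{mask_token}部分。{paragraph}"

print(build_prefix_prompt("清晨，天刚蒙蒙亮，我们便出发了。"))
```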
-
Resource Allocation Method with Workload-time Windows for Serverless Applications in Cloud-edge Collaborative Environment
张铭豪, 肖博怀, 郑松, 陈星. 云边协同环境下面向负载时间窗口的无服务器应用资源分配方法[J]. 计算机科学, 2025, 52(6): 336-345.
ZHANG Minghao, XIAO Bohuai, ZHENG Song, CHEN Xing. Resource Allocation Method with Workload-time Windows for Serverless Applications in Cloud-edge Collaborative Environment[J]. Computer Science, 2025, 52(6): 336-345. - ZHANG Minghao, XIAO Bohuai, ZHENG Song, CHEN Xing
- Computer Science. 2025, 52 (6): 336-345. doi:10.11896/jsjkx.240400073
-
Abstract
PDF(3406KB) ( 59 )
- References | Related Articles | Metrics
-
With increasingly diverse computational demands in the cloud-edge collaborative environment, the traditional computing architecture that uses virtual machines as the smallest unit of resources is inflexible and not cost-effective. Serverless computing, an emerging computing architecture with excellent scalability and flexibility, provides a new perspective on these issues. To address the resource allocation problem with workload-time windows for serverless applications in cloud-edge collaborative environments, this study proposes a resource allocation method based on a rule-driven co-evolution algorithm (RARCA). The method considers the workload at a given resource adjustment moment and in the foreseeable future, and employs a rule-driven distributed resource updating mechanism to achieve dynamic allocation and adjustment of computing resources. In addition, by leveraging the information sharing and cooperative optimization capabilities of the co-evolution mechanism, the algorithm efficiently searches for globally optimal resource allocation solutions, significantly improving the responsiveness and effectiveness of the overall resource allocation method. Experimental results demonstrate that RARCA obtains superior resource allocation solutions with decision times on the order of seconds, outperforming baseline methods by 2.8% to 14.5% in resource allocation performance.
-
Customized Container Scheduling Strategy Based on GMM
周凯, 王凯, 朱宇航, 普黎明, 刘树新, 周德强. 基于GMM的容器定制化调度策略[J]. 计算机科学, 2025, 52(6): 346-354.
ZHOU Kai, WANG Kai, ZHU Yuhang, PU Liming, LIU Shuxin, ZHOU Deqiang. Customized Container Scheduling Strategy Based on GMM[J]. Computer Science, 2025, 52(6): 346-354. - ZHOU Kai, WANG Kai, ZHU Yuhang, PU Liming, LIU Shuxin, ZHOU Deqiang
- Computer Science. 2025, 52 (6): 346-354. doi:10.11896/jsjkx.240900154
-
Abstract
PDF(3560KB) ( 75 )
- References | Related Articles | Metrics
-
In cloud computing environments, as the number and variety of containers continue to grow, resource management and scheduling become increasingly complex. How to schedule containers effectively and optimize resource utilization and cluster performance has become an important research topic. Existing container cluster scheduling strategies do not fully consider the diverse needs of containers, lack flexibility, and are difficult to customize for containers in different scenarios, which easily leads to problems such as low cluster resource utilization and imbalanced cluster resource load. To meet the diverse needs of containers and improve the load balancing of cluster resources, this paper proposes a customized container scheduling strategy based on the Gaussian mixture model (GMM). Firstly, containers are classified into different types according to their resource and attribute requirements. Secondly, for each type of container, independent weights are calculated and assigned separately, and containers are scheduled to appropriate nodes according to their types in turn, thereby achieving customized scheduling. In this way, the diverse needs of containers are met: different types of containers obtain the optimal resource allocation according to their specific needs, avoiding resource competition and conflicts and thereby improving the overall utilization and load balancing of cluster resources. Experimental results show that, compared with the Kubernetes Scheduler, this scheduling strategy performs better in various container scheduling scenarios, with the maximum resource utilization difference between cluster nodes reduced by 17.1%, the container scheduling success rate increased by 19%, and the cluster node load balance improved by 57.51%.
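The classification step can be illustrated with scikit-learn's Gaussian mixture model: containers are described by their resource demands and grouped into types, and each type can then be scheduled with its own weights. The two-feature synthetic data and the number of components below are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# synthetic containers described by (cpu cores, memory GB)
rng = np.random.default_rng(0)
cpu_heavy = rng.normal([4.0, 1.0], 0.3, size=(50, 2))
mem_heavy = rng.normal([1.0, 8.0], 0.3, size=(50, 2))
containers = np.vstack([cpu_heavy, mem_heavy])

# fit a GMM and assign each container a type; a new container is classified the same way
gmm = GaussianMixture(n_components=2, random_state=0).fit(containers)
types = gmm.predict(containers)                   # container type per instance
print(gmm.predict([[3.8, 1.2]]))                  # a new cpu-heavy container
```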
-
Time-constrained Mobile Charging Scheduling for Heterogeneous Sensing in Wireless Rechargeable Sensor Networks
李德强, 任新一, 徐佳. 无线可充电传感器网络中异构感知的限时移动充电调度[J]. 计算机科学, 2025, 52(6): 355-364.
LI Deqiang, REN Xinyi, XU Jia. Time-constrained Mobile Charging Scheduling for Heterogeneous Sensing in Wireless Rechargeable Sensor Networks[J]. Computer Science, 2025, 52(6): 355-364. - LI Deqiang, REN Xinyi, XU Jia
- Computer Science. 2025, 52 (6): 355-364. doi:10.11896/jsjkx.240400186
-
Abstract
PDF(2533KB) ( 60 )
- References | Related Articles | Metrics
-
Wireless sensor networks (WSNs) are widely deployed in various applications, including military surveillance, disaster prediction, and hazardous environment exploration. However, the limited lifespan of wireless sensors necessitates frequent battery replacements, leading to high maintenance costs and significant inconvenience. In recent years, with the advent of wireless power transmission technology, wireless rechargeable sensor networks (WRSNs) have been developed to address these issues, providing new avenues for research. Nonetheless, existing studies typically prioritize charging capacities while underestimating the urgency and heterogeneity of sensors in emergency scheduling. Formally, this paper treats the scheduling task as a constrained optimization problem that aims to maximize the monitoring utility of heterogeneous sensors, which is proven to be NP-hard. The problem is then converted to submodular maximization through the discretization of charging time, which naturally leads to approximation algorithms based on a greedy strategy, with a theoretical guarantee on the approximation ratio to the optimal value. Extensive experiments demonstrate that the proposed algorithms significantly enhance monitoring utility, with the highest improvement reaching 279.79% compared with the classical NJNP algorithm.
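The greedy strategy over discretized charging time can be sketched as repeatedly picking the slot with the best marginal utility per unit time until the time budget is spent; the slot representation and toy utility below are hypothetical, not the paper's exact algorithm, and the usual approximation guarantees assume the utility is monotone submodular.

```python
def greedy_charging_schedule(slots, utility, time_budget):
    """Greedy sketch of the discretized scheduling idea.

    Each slot is (sensor_id, duration).  At every step the slot with the
    highest marginal utility per unit time that still fits the budget is added.
    """
    selected, used = [], 0.0
    remaining = list(slots)
    while remaining:
        gains = [
            ((utility(selected + [s]) - utility(selected)) / s[1], s)
            for s in remaining
            if used + s[1] <= time_budget
        ]
        if not gains:
            break
        _, best = max(gains, key=lambda g: g[0])
        selected.append(best)
        used += best[1]
        remaining.remove(best)
    return selected

# toy utility with diminishing returns: covering a new sensor is worth far more
# than adding extra charging time to an already covered one
def toy_utility(selected):
    covered = {sid for sid, _ in selected}
    return 10 * len(covered) + 0.1 * sum(d for _, d in selected)

slots = [("s1", 2.0), ("s1", 3.0), ("s2", 4.0), ("s3", 1.5)]
print(greedy_charging_schedule(slots, toy_utility, time_budget=6.0))
```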
-
Survey of Binary Code Similarity Detection Method
魏有缘, 宋建华, 张龑. 二进制代码相似性检测方法综述[J]. 计算机科学, 2025, 52(6): 365-380.
WEI Youyuan, SONG Jianhua, ZHANG Yan. Survey of Binary Code Similarity Detection Method[J]. Computer Science, 2025, 52(6): 365-380. - WEI Youyuan, SONG Jianhua, ZHANG Yan
- Computer Science. 2025, 52 (6): 365-380. doi:10.11896/jsjkx.240400003
-
Abstract
PDF(2366KB) ( 71 )
- References | Related Articles | Metrics
-
Code similarity detection can be divided into two types according to the research object: source code similarity detection and binary code similarity detection, which are commonly used in scenarios such as malicious code identification, vulnerability search, and copyright protection. In the current domestic Internet environment, programs are usually released as binary files and the source code of most programs is not directly available; therefore, in software security research, binary code similarity detection has a wider scope of application. Starting from the definition and implementation process of binary code similarity detection, this paper classifies methods into three categories according to their code representation: text character-based, code embedding-based, and graph embedding-based. Classic binary code similarity detection methods and the research progress of the past five years are compared, and 19 papers on new methods are surveyed; the methods are analyzed and summarized in terms of multi-architecture support, baselines, benchmark datasets, and detection performance. Finally, current problems and possible future research directions are analyzed based on the development of these new methods.
-
Balancing Transferability and Imperceptibility for Adversarial Attacks
康凯, 王家宝, 徐堃. 平衡可迁移与不可察觉的对抗攻击[J]. 计算机科学, 2025, 52(6): 381-389.
KANG Kai, WANG Jiabao, XU Kun. Balancing Transferability and Imperceptibility for Adversarial Attacks[J]. Computer Science, 2025, 52(6): 381-389. - KANG Kai, WANG Jiabao, XU Kun
- Computer Science. 2025, 52 (6): 381-389. doi:10.11896/jsjkx.240300083
-
Abstract
PDF(3256KB) ( 67 )
- References | Related Articles | Metrics
-
Data-driven deep learning models are vulnerable to well-designed adversarial attacks because they cannot cover all possible sample data. Existing mainstream Lp-norm perturbation attack methods based on the RGB pixel space achieve high attack success rates and transferability, but the generated adversarial samples contain high-frequency noise that is easily perceived by the human eye. Attack methods based on diffusion models balance transferability and imperceptibility, but their optimization strategies mainly focus on the perspective of the adversarial models; these studies lack in-depth exploration and analysis of transferability and imperceptibility from the perspective of the surrogate model. To further explore and analyze the sources of control over transferability and imperceptibility, a new adversarial sample generation method based on a latent diffusion model is proposed within the framework of surrogate-model-based attacks. In this method, under the constraint of a basic adversarial loss, a transferable attention constraint loss and an imperceptible consistency constraint loss are designed to achieve a balance between transferability and imperceptibility. On three publicly available datasets, ImageNet Compatible, CUB-200-2011, and Stanford Cars, compared with existing methods, the proposed method generates adversarial samples with strong cross-model transferable attack ability and perturbations that are imperceptible to the human eye.
-
Performance Optimization Method for Domestic Cryptographic Algorithm SM9
谢振杰, 刘奕明, 蔡瑞杰, 罗友强. 国密算法SM9的性能优化方法[J]. 计算机科学, 2025, 52(6): 390-396.
XIE Zhenjie, LIU Yiming, CAI Ruijie, LUO Youqiang. Performance Optimization Method for Domestic Cryptographic Algorithm SM9[J]. Computer Science, 2025, 52(6): 390-396. - XIE Zhenjie, LIU Yiming, CAI Ruijie, LUO Youqiang
- Computer Science. 2025, 52 (6): 390-396. doi:10.11896/jsjkx.240300141
-
Abstract
PDF(2048KB) ( 44 )
- References | Related Articles | Metrics
-
To address the challenge of optimizing the computational performance of the domestic cryptographic algorithm SM9, a suite of performance enhancement techniques is developed and applied. These techniques include fixed-point scalar multiplication precomputation on elliptic curves, an improved Miller algorithm with precomputation, an optimized construction for the hard part of the final exponentiation, modular exponentiation within the cyclotomic subgroup, and modular exponentiation employing a Comb-based fixed-base strategy. With these tailored approaches, significant speedups are achieved in the most time-consuming steps of the SM9 algorithm, such as scalar multiplication on elliptic curves, bilinear pairing, and modular exponentiation in the degree-12 extension field. The seven fundamental SM9 algorithms, covering digital signature generation and verification, key exchange, key encapsulation and decapsulation, and encryption and decryption, are implemented in Python. Comprehensive testing shows that the integration of these optimization techniques yields performance improvements ranging from 32% to 352% for the SM9 algorithms, marking a substantial advance in their computational efficiency.
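The benefit of fixed-base precomputation can be illustrated with a simplified sketch: the powers of a fixed base are computed once, after which every exponentiation only multiplies the precomputed values selected by the exponent bits. This is a plain fixed-base method, not the full Comb algorithm used in the paper, and the modulus below is chosen only for the example.

```python
def precompute_base(g, p, bits=256):
    """Precompute g^(2^i) mod p for every bit position (done once per fixed base)."""
    table = [g % p]
    for _ in range(1, bits):
        table.append(table[-1] * table[-1] % p)
    return table

def fixed_base_pow(table, e, p):
    """Fixed-base modular exponentiation: multiply the precomputed powers that
    correspond to the set bits of the exponent, so no squarings are needed at
    exponentiation time."""
    acc, i = 1, 0
    while e:
        if e & 1:
            acc = acc * table[i] % p
        e >>= 1
        i += 1
    return acc

p = 2**255 - 19                       # any prime modulus works for the sketch
table = precompute_base(7, p)
assert fixed_base_pow(table, 123456789, p) == pow(7, 123456789, p)
```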
-
Study on Efficacy Mechanism for IoT Data Flow Threats
孙瑞杰, 李鹏, 朱枫. 物联网数据流威胁致效机理研究[J]. 计算机科学, 2025, 52(6): 397-404.
SUN Ruijie, LI Peng, ZHU Feng. Study on Efficacy Mechanism for IoT Data Flow Threats[J]. Computer Science, 2025, 52(6): 397-404. - SUN Ruijie, LI Peng, ZHU Feng
- Computer Science. 2025, 52 (6): 397-404. doi:10.11896/jsjkx.240400133
-
Abstract
PDF(2225KB) ( 63 )
- References | Related Articles | Metrics
-
With the explosive growth in the number of IoT devices, the means of attacking IoT devices have also become diverse and covert. Machine learning-based detection methods have been actively researched and have shown great potential. However, these models are regarded as black boxes, making it difficult to explain their classification results and thus impossible to explain the specific means and patterns of IoT threats. To address this issue, this paper constructs a technique-feature dictionary based on the ATT&CK framework, characterizing attack techniques with traffic features, and builds a threat-technique database that decomposes network threats to the level of attack techniques. A threat detection model based on this efficacy mechanism is designed: it constructs a real-time traffic feature matrix, summarizes the attack techniques observed in the traffic, and feeds the technique sequence into the threat-technique database to obtain the possible threats and their probabilities. Experimental results show that the proposed model achieves a threat detection rate as high as 99.595% on the dataset, outperforming traditional methods; moreover, it can adjust the false positive rate according to the experimental environment and provides reliable attack path explanations for analysts.
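The dictionary-based mapping from traffic features to techniques and then to candidate threats can be sketched as simple set matching and scoring; the feature names, technique IDs, and threat entries below are hypothetical illustrations, not contents of the paper's databases.

```python
# technique-feature dictionary: each ATT&CK-style technique is characterized by traffic features
TECHNIQUE_FEATURES = {
    "T1046": {"many_dst_ports", "short_flows"},          # network service scanning
    "T1110": {"repeated_auth_failures", "single_dst"},   # brute force
}
# threat-technique database: each threat is a sequence of techniques
THREAT_TECHNIQUES = {
    "port_scan_then_bruteforce": ["T1046", "T1110"],
    "pure_scan": ["T1046"],
}

def match_techniques(flow_features):
    """Tag a flow with every technique whose feature set it covers."""
    return [t for t, feats in TECHNIQUE_FEATURES.items() if feats <= flow_features]

def rank_threats(techniques):
    """Score each candidate threat by the fraction of its techniques observed."""
    seen = set(techniques)
    scores = {name: len(seen & set(seq)) / len(seq) for name, seq in THREAT_TECHNIQUES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

flow = {"many_dst_ports", "short_flows", "repeated_auth_failures", "single_dst"}
print(rank_threats(match_techniques(flow)))
```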
-
Adversarial Face Privacy Protection Based on Makeup Style Patch Activation
袁霖, 黄令, 郝凯乐, 张家伟, 朱明瑞, 王楠楠, 高新波. 基于妆容风格补丁激活的对抗性人脸隐私保护[J]. 计算机科学, 2025, 52(6): 405-413.
YUAN Lin, HUANG Ling, HAO Kaile, ZHANG Jiawei, ZHU Mingrui, WANG Nannan, GAO Xinbo. Adversarial Face Privacy Protection Based on Makeup Style Patch Activation[J]. Computer Science, 2025, 52(6): 405-413. - YUAN Lin, HUANG Ling, HAO Kaile, ZHANG Jiawei, ZHU Mingrui, WANG Nannan, GAO Xinbo
- Computer Science. 2025, 52 (6): 405-413. doi:10.11896/jsjkx.241200001
-
Abstract
PDF(4517KB) ( 71 )
- References | Related Articles | Metrics
-
Facial recognition technology has developed rapidly, greatly facilitating people's lives, but it has also raised public concerns about personal privacy. Facial images shared through social media and the Internet may be collected by illegal organizations, which can use facial recognition systems to identify the users and steal their privacy-related information. Therefore, a privacy protection mechanism is needed to ensure that facial images published by users through public media can still be viewed normally by people while preventing facial recognition systems from extracting accurate identity information. Mainstream adversarial-sample-based methods can solve this problem to some extent, but they inevitably introduce noise that is easily detected in the images. When people share personal photos on social media and other platforms, they often add beautification effects; therefore, cleverly embedding adversarial perturbations while adding beautification effects to the images to achieve identity privacy protection is a win-win choice. To this end, this paper proposes a facial image identity privacy protection method based on makeup style patch activation. The method activates the makeup style of a reference facial image into the features of the original facial image through feature patches, and then reconstructs the activated features into adversarial facial images with makeup. At the same time, it uses an identity privacy enhancement module to force the identity features of the generated image to approach a target identity, thereby obtaining adversarial privacy protection capabilities. Experimental results show that the facial images generated by this method not only have better visual effects and a variety of makeup styles, but also effectively defend against privacy infringement by various black-box facial recognition models.