Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 52 Issue 4, 15 April 2025
Smart Embedded Systems
Preface of the Special Issue on Smart Embedded Systems
Computer Science. 2025, 52 (4): 1-3.  doi:10.11896/jsjkx.qy20250401
Survey of Sensor Attack Defense Strategies for Cyber Physical Systems
CHEN Yanfeng, FENG Zhiwei, DENG Qingxu, WANG Yan
Computer Science. 2025, 52 (4): 4-13.  doi:10.11896/jsjkx.241000138
Cyber-physical system (CPS), as an intelligent system integrating computation, communication and control, plays an increasingly important role in various fields such as intelligent transportation and healthcare. Sensors play a crucial role in CPS but are also commonly targeted by attackers. Firstly, the scope of research on sensor attack defense is clarified, and the relevant studies on sensor attacks are categorized into attack prevention, defense and recovery according to the timing of attack occurrence. Next, the types and impacts of sensor attacks are reviewed, including DoS attacks, replay attacks and deception attacks. Then, sensor attack detection methods based on multi-source consistency, historical consistency and response consistency are summarized. Subsequently, data fusion methods after attack detection are discussed, including Kalman filter-based and interval-based data fusion methods. Finally, potential future research directions are explored to further enhance the defense capabilities against sensor attacks in CPS.
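To illustrate the flavor of Kalman filter-based consistency checking surveyed above, the following is a minimal sketch: a hypothetical 1-D constant-state filter with a normalized-residual test, not any specific scheme from the surveyed literature. All parameter values are illustrative.

```python
import numpy as np

def kalman_residual_detector(measurements, q=1e-4, r=0.09, threshold=5.0):
    """Flag samples whose normalized innovation exceeds the threshold.

    A 1-D constant-state Kalman filter: a deception or replay attack that
    injects a bias into the sensor stream produces large innovations, so a
    simple consistency test on the residual exposes it."""
    x, p = 0.0, 1.0                            # state estimate and variance
    alarms = []
    for k, z in enumerate(measurements):
        p += q                                 # predict step
        s = p + r                              # innovation variance
        nu = z - x                             # innovation (residual)
        if abs(nu) / s ** 0.5 > threshold:     # consistency test
            alarms.append(k)
            continue                           # drop the suspect sample
        g = p / s                              # Kalman gain
        x += g * nu                            # update step
        p *= 1.0 - g
    return alarms

rng = np.random.default_rng(0)
stream = rng.normal(0.0, 0.3, size=200)
stream[120:] += 5.0                            # bias injected mid-stream
alarms = kalman_residual_detector(stream)
print(alarms[0])  # first alarm coincides with the attack onset
```

Because the suspect samples are discarded rather than fused, the state estimate stays anchored to the pre-attack behavior and every biased sample keeps tripping the test.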
Prospects for the Development of Information System Architecture--Taking the National Natural Science Foundation's Information System as an Example
YAO Chang, HAO Yanni, PENG Shenghui, NIU Zhiang, XIE Yong, ZHAO Shizhen
Computer Science. 2025, 52 (4): 14-20.  doi:10.11896/jsjkx.240900144
In recent years, with the widespread adoption and rapid development of technologies such as the Internet, big data and artificial intelligence, scientific research management is gradually shifting from traditional models to new paradigms driven by data, informatization and intelligence. This transformation has made traditional information system architectures increasingly inadequate to cope with the demands of massive data exchange, data sharing and data security. This paper takes the information system architecture of the National Natural Science Foundation of China (hereinafter referred to as "the NSFC") as an example to explore its development and evolution directions. Faced with the advancement of science fund reforms in the new era, the existing information systems of the NSFC are facing numerous challenges in terms of network and data security, enhancement of intelligent service levels, and optimization of system development and management efficiency. This paper first introduces the current application status of two traditional information system architectures in the NSFC, and then conducts an in-depth analysis of the challenges faced by the existing information systems from four aspects: the growth of business volume and the number of business systems, the replacement of information technology with domestic innovation, intelligent services, and data management. Finally, it provides thoughts on the subsequent development of the information system architecture from two dimensions: data management and microservices.
Dynamic Conflict-Prediction Based Algorithm for Multi-agent Path Finding
ZHANG Mengxi, HAN Jianjun, XIAO Yan
Computer Science. 2025, 52 (4): 21-32.  doi:10.11896/jsjkx.241000101
Multi-agent path finding (MAPF) is the problem of searching for collision-free paths for a group of agents. Currently, the flexible explicit estimation conflict-based search algorithm (FEECBS) is deemed one of the most effective bounded-suboptimal algorithms for the MAPF problem, but it has several disadvantages concerning the frequent invocation of the low-level algorithm and the slow reduction in the number of conflicts during iterations. To address such issues, this paper proposes a dynamic conflict-prediction based algorithm for multi-agent path finding (DCPB-MAPF). The DCPB-MAPF algorithm operates on a two-layer framework. On the low level, it investigates an optimized dynamic obstacle avoidance method based on critical intervals and a new iterative method based on path cost prediction to improve efficiency. On the high level, it develops a conflict-prediction based search method together with the low-level algorithm. Firstly, the conflict selection technique is improved by quickly predicting the number of potential collisions. Further, a new method is proposed to accelerate the reduction of conflicts by constructing a heuristic function. Extensive experimental results demonstrate that DCPB-MAPF can effectively improve both efficiency and success rate on MAPF instances when compared to existing algorithms.
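For readers unfamiliar with the conflicts that conflict-based search solvers (FEECBS included) reason about, the two standard kinds can be enumerated as follows. This is a generic sketch of vertex/edge conflict detection, not DCPB-MAPF itself:

```python
def find_conflicts(paths):
    """Return (time, agent_a, agent_b, kind) conflicts between timed paths.

    paths: dict agent -> list of cells, one per timestep (agents wait at
    their goal cell once a path ends). Detects vertex conflicts (two agents
    in the same cell at the same step) and edge conflicts (two agents
    swapping cells between consecutive steps)."""
    agents = sorted(paths)
    horizon = max(len(p) for p in paths.values())
    at = lambda a, t: paths[a][min(t, len(paths[a]) - 1)]
    conflicts = []
    for t in range(horizon):
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                if at(a, t) == at(b, t):
                    conflicts.append((t, a, b, "vertex"))
                elif t > 0 and at(a, t) == at(b, t - 1) and at(b, t) == at(a, t - 1):
                    conflicts.append((t, a, b, "edge"))
    return conflicts

# Two agents crossing the same corridor collide in the middle cell:
print(find_conflicts({1: [(0, 0), (0, 1), (0, 2)],
                      2: [(0, 2), (0, 1), (0, 0)]}))  # -> [(1, 1, 2, 'vertex')]
```

High-level CBS-style search resolves each such conflict by branching on constraints that forbid one agent from the conflicting cell or edge at that timestep.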
Research on Dynamic Redundancy Reliability Mechanisms Based on Multi-core Heterogeneous Operating Systems
HE Ruiqi, ZHANG Kailong, WU Jinfei, YU Qiang, ZHANG Jiaming
Computer Science. 2025, 52 (4): 33-39.  doi:10.11896/jsjkx.241100020
In response to the hybrid deployment requirements and functional safety needs of current embedded systems, this paper proposes a dynamic heterogeneous redundant operating system architecture, DHR-OS. Designed for hybrid deployment, the architecture features a mixed deployment model of heterogeneous operating systems, where Linux serves as the primary operating system on a multi-core CPU, while RTOS instances are dynamically deployed from operating system images. To facilitate collaboration between operating systems, communication between the master and slave operating systems is implemented using OpenAMP. Furthermore, based on OpenAMP, mechanisms for time-division multiplexing of device drivers, remote procedure calls, and interrupt forwarding and routing are established. To address functional safety requirements, the architecture includes a critical task safety execution mechanism that integrates scheduling, dispatching, and adjudication. Specifically, the Linux operating system prepares a pool of RTOS cores. When executing critical tasks, several RTOS cores are scheduled from this pool to serve as the task execution environment. The adjudicator on the Linux side processes the results returned by the RTOS core tasks using a distributed consensus algorithm based on weighted voting. This design enhances the system's flexibility and resilience against attacks, providing a novel architectural solution to the hybrid deployment and functional safety needs of embedded systems, with significant innovation and practical value.
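The weighted-voting adjudication step described above can be sketched in a few lines. The values and per-core weights below are illustrative, not taken from DHR-OS:

```python
def weighted_vote(results, weights):
    """Adjudicate redundant results by weighted voting.

    results: the value returned by each redundant RTOS core;
    weights:  a trust weight per core (e.g. reflecting past reliability).
    Returns the value with the largest total weight."""
    tally = {}
    for value, w in zip(results, weights):
        tally[value] = tally.get(value, 0.0) + w
    return max(tally, key=tally.get)

# Two lightly weighted cores agreeing outvote one heavier dissenter:
print(weighted_vote([42, 42, 7], [0.5, 0.4, 0.6]))  # -> 42
```

In a real adjudicator the weights would be updated over time and a quorum check would reject the result when no value reaches a safe margin.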
Autonomous Obstacle Avoidance Method for Unmanned Surface Vehicles Based on Improved Proximal Policy Optimization
KONG Chao, WANG Wei, HUANG Subin, ZHANG Yi, MENG Dan
Computer Science. 2025, 52 (4): 40-48.  doi:10.11896/jsjkx.241000084
Autonomous obstacle avoidance has become a critical challenge for expanding the application scenarios of unmanned surface vehicles (USVs). Traditional methods for USV obstacle avoidance mainly rely on fine-grained environmental modeling. However, in complex marine environments, USVs have difficulty obtaining complete perception states, leading to insufficient model accuracy. To address this issue, we propose an improved proximal policy optimization (PPO)-based autonomous obstacle avoidance method for USVs. First, a perception and decision framework for USVs based on the Markov decision process is constructed. Then, a feature-sharing representation optimization module is designed by fusing recurrent neural networks to enhance the USV's memory ability for temporal environmental perception. Finally, an autonomous obstacle avoidance reward function is designed by combining reward reshaping mechanisms to improve the optimization speed of the USV obstacle avoidance strategy. To verify the effectiveness of the proposed algorithm, a typical USV autonomous obstacle avoidance algorithm verification scenario is constructed on a three-dimensional simulation platform. Experimental results show that the improved PPO-based method can achieve collision-free autonomous navigation for USVs and outperforms the traditional PPO algorithm in terms of model convergence speed, collision rate, and timeout rate.
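The clipped surrogate objective that standard PPO maximizes (and that the improved method builds on) can be written as a short sketch. This is the generic textbook form, not the paper's full training loop:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (maximized during training).

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to
    [1 - eps, 1 + eps] removes the incentive to move the new policy
    far from the behavior policy in a single update."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(ratio * advantage, clipped)

# A large ratio with a positive advantage is capped at (1 + eps) * A:
print(ppo_clip_objective(np.array([1.5]), np.array([2.0])))  # -> [2.4]
```

Taking the elementwise minimum makes the bound pessimistic in both directions: gains from over-large ratios are clipped, while losses are never understated.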
New Decomposition Method for Cyber-Physical Systems Based on Interpreted Petri Nets
CHEN Yuhao, TU Hanqian, XIANG Dongming
Computer Science. 2025, 52 (4): 49-53.  doi:10.11896/jsjkx.241000103
Petri nets are widely used for the modeling and analysis of Cyber-Physical Systems (CPS), which typically involve multiple concurrent tasks. To simplify the implementation of CPS, these systems can be decomposed into several independent components. Existing CPS decomposition methods, such as algorithms based on integer linear algebra, suffer from high time complexity, while decomposition methods that rely on monitors incur significant communication overhead. To address these issues, this paper integrates the advantages of existing decomposition approaches and proposes a novel CPS decomposition method based on Interpreted Petri Nets (IPN). The proposed method incrementally decomposes the network using constraint conditions, generating independent State Machine Components (SMCs) to effectively reduce the model size. Additionally, a new signal synchronization mechanism is introduced to replace traditional monitor-based schemes, significantly reducing synchronization overhead. Experimental results demonstrate that the proposed method achieves a decomposition time complexity of O(n^2) in most test cases, which is far superior to the exponential complexity O(2^n) of traditional methods. Furthermore, the generated component set is more compact and efficient.
Selection Method for Cloud Manufacturing Industrial Services Based on Generative Adversarial Networks
ZHENG Xiubao, LI Jing, ZHU Ming, NING Yingying
Computer Science. 2025, 52 (4): 54-63.  doi:10.11896/jsjkx.241000102
With the deep integration of information technology and manufacturing technology, cloud manufacturing industrial production has become a key part of the manufacturing industry. Due to the dynamics of the cloud manufacturing environment and the interdependencies between service resources, it is not easy to select the best industrial resource services. Most existing selection optimization methods are based on heuristic algorithms, but these algorithms often cannot adapt to the cloud manufacturing environment. Therefore, this paper constructs a service selection model for the cloud manufacturing environment and proposes a service selection algorithm based on deep learning and generative adversarial network ideas, which can flexibly adapt to environmental changes. The algorithm uses a graph representation learning method to construct a task-service constraint graph, then learns the characteristics of resource services from the intrinsic relationships between tasks, services, and industrial production constraints, and introduces gradient optimization and loss function strategies in the algorithm improvement stage to select the best industrial resource services. Experiments show that the proposed algorithm has stronger performance advantages than the other comparison algorithms.
Study on Lightweight Flame Detection Algorithm with Progressive Adaptive Feature Fusion
LI Xiaolan, MA Yong
Computer Science. 2025, 52 (4): 64-73.  doi:10.11896/jsjkx.241000093
In response to the challenge of balancing accuracy and real-time performance when deploying flame detection models on edge computing platforms for visual security systems, a lightweight flame detection algorithm featuring progressive adaptive feature fusion is proposed. Firstly, a lightweight sparse convolution operator is designed to reduce the model's computational complexity and memory access cost. Subsequently, to address the shortcomings of inter-channel information exchange in grouped convolutions, a lightweight feature extraction component is constructed based on the residual concept, enhancing long-distance contextual features. To tackle the issues of feature loss and background interference in deep backbone networks, an innovative lightweight feature enhancement mechanism based on high-frequency augmentation is proposed, optimizing the parameters in both spatial and channel domains to mitigate background disturbances. On this basis, a feature enhancement-progressive adaptive feature fusion framework is established to facilitate the thorough integration of feature maps at different scales, thereby improving the utilization of feature maps and enhancing the recognition effectiveness of multi-scale targets. Experimental results demonstrate that this method achieves a real-time inference speed of up to 27.1 FPS, reduces the parameter count to 2.1 M (a 69.5% reduction compared to the baseline model), and attains a detection accuracy of 83.4% mAP@0.5, significantly outperforming existing mainstream methods.
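To see why lightweight convolution operators shrink models so dramatically, compare parameter counts for a standard convolution and a depthwise-separable factorization. This is an illustrative calculation for a common lightweight building block, not the paper's exact sparse operator:

```python
def conv_params(c_in, c_out, k):
    """Parameter counts (bias-free) for a k x k convolution layer.

    standard:  every output channel mixes all input channels spatially;
    separable: one k x k depthwise filter per input channel, followed by
               a 1 x 1 pointwise convolution that mixes channels."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

std, sep = conv_params(64, 128, 3)
print(std, sep, round(1 - sep / std, 3))  # 73728 8768 0.881
```

For a 3x3 layer with 64 input and 128 output channels the factorization removes roughly 88% of the parameters, which is the kind of headroom that makes the reported 69.5% whole-model reduction plausible on edge hardware.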
Joint Optimization of UAV Trajectories and Computational Offloading for Space-Air-Ground Integrated Networks
CHEN Yitian, TONG Yinghua
Computer Science. 2025, 52 (4): 74-84.  doi:10.11896/jsjkx.241000098
As an emerging network architecture, the space-air-ground integrated network has attracted significant attention from researchers in recent years, as it can greatly improve the overall quality of service. Addressing the challenges of insufficient network coverage and the lack of basic infrastructure in remote areas, a space-air-ground integrated network framework is proposed in which unmanned aerial vehicles (UAVs) and satellites collaboratively collect tasks. In this framework, UAVs and satellites provide edge computing services for ground sensors, while cloud servers deliver cloud services. Given that UAV coverage, task completion rate, and task latency are critical factors influencing system performance, this study jointly optimizes UAV trajectory and computation offloading to maximize UAV coverage and task completion rate while minimizing latency. The joint optimization problem is formulated as a mixed-integer nonlinear programming problem; therefore, a dual-layer optimization algorithm based on Beluga Whale Optimization and Sand Cat Swarm Optimization is developed, with the two layers separately optimizing UAV trajectory and computation offloading. Experimental results show that the proposed algorithm significantly improves the coverage rate of multiple UAVs, effectively enhances the task completion rate, and reduces average task latency in computation offloading.
Stochastic Optimization Method for Multi-exit Deep Neural Networks for Edge Intelligence Applications
LI Zhoucheng, ZHANG Yi, SUN Jin
Computer Science. 2025, 52 (4): 85-93.  doi:10.11896/jsjkx.241000097
As a novel intelligent computing paradigm, edge intelligence can effectively enhance the response speed of intelligent inference tasks on embedded edge devices. Age of information (AoI), an important metric for measuring data freshness, is of great significance to the computing resource overhead and real-time response of edge intelligence applications. This work studies the resource allocation optimization problem for multi-exit deep neural networks (DNNs), taking into account the uncertainty of AoI caused by exit probabilities and introducing a probabilistic constraint on system AoI. Stochastic optimization theory is incorporated to decide on the most appropriate exit configuration, with the goal of minimizing the resource overhead of multi-exit DNNs. A cuckoo search-based metaheuristic algorithm is proposed to solve the stochastic optimization problem with the probabilistic AoI constraint. The metaheuristic predicts the statistical distribution of system AoI based on the exit probabilities, calculates the resource consumption according to a specified AoI threshold and uses it as the fitness value of the corresponding cuckoo individual, and iteratively updates the cuckoo population to explore the exit configuration with the lowest computing resource overhead. Experimental results on various DNN models show that, compared with deterministic optimization methods, the stochastic optimization approach can produce better exit configurations, significantly reducing resource overhead while satisfying the probabilistic AoI constraint.
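The population update at the heart of cuckoo search can be sketched in its standard Yang and Deb form. The sphere objective, search bounds, and parameter values below are illustrative stand-ins for the paper's AoI-constrained fitness, not its actual formulation:

```python
import math
import numpy as np

def cuckoo_search(fitness, dim=2, n_nests=15, iters=200, pa=0.25, seed=0):
    """Minimal cuckoo search: Levy-flight moves toward the best nest plus
    abandonment of a fraction pa of the worst nests each generation."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fit = np.array([fitness(n) for n in nests])
    beta = 1.5                                 # Levy exponent (Mantegna scheme)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        best = nests[fit.argmin()]
        u = rng.normal(0.0, sigma, (n_nests, dim))
        v = rng.normal(0.0, 1.0, (n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)     # heavy-tailed Levy step
        cand = nests + 0.01 * step * (nests - best)
        cand_fit = np.array([fitness(c) for c in cand])
        better = cand_fit < fit                # greedy replacement
        nests[better], fit[better] = cand[better], cand_fit[better]
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5.0, 5.0, (len(worst), dim))
        fit[worst] = np.array([fitness(n) for n in nests[worst]])
    return nests[fit.argmin()], float(fit.min())

x_best, f_best = cuckoo_search(lambda v: float(np.sum(v ** 2)))
print(f_best)  # close to the optimum 0 of the sphere function
```

In the paper's setting the fitness evaluation would instead sample the AoI distribution induced by the exit probabilities and penalize configurations that violate the probabilistic AoI constraint.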
Efficient Adaptive CNN Accelerator for Resource-limited Chips
PANG Mingyi, WEI Xianglin, ZHANG Yunxiang, WANG Bin, ZHUANG Jianjun
Computer Science. 2025, 52 (4): 94-100.  doi:10.11896/jsjkx.241000099
This paper proposes an adaptive convolutional neural network accelerator (ACNNA) for non-GPU chips with limited resources, which can adaptively generate hardware accelerators based on the resource constraints of the hardware platform and the structure of the convolutional neural network. Through its reconfigurable design, ACNNA can effectively accelerate various layer combinations including convolutional layers, pooling layers, activation layers, and fully connected layers. Firstly, a resource-folding multi-channel processing engine (PE) array is designed, which folds the idealized convolutional structure to save resources and unfolds along the output channel to support parallel computing. Secondly, multi-level storage and ping-pong caching mechanisms are adopted to optimize the pipeline, effectively improving data processing efficiency. Then, a resource reuse strategy under multi-level storage is proposed, which, combined with a design space exploration algorithm, can more reasonably schedule hardware resource allocation for network parameters, so that low-resource chips can deploy deeper networks with more parameters. Taking the LeNet5 and VGG16 network models as examples, this paper validates ACNNA on the Ultra96 V2 development board. The results show that the ACNNA deployment of VGG16 consumes only 4% of the resources of the original network. At a 100 MHz main frequency, the LeNet5 accelerator achieves a computing rate of 0.37 GFLOPS with a power consumption of 2.05 W, and the VGG16 accelerator achieves 1.55 GFLOPS at a power consumption of 2.132 W. Compared with existing work, ACNNA increases Frames Per Second (FPS) by over 83%.
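The ping-pong caching idea, overlapping data movement with computation by alternating between two buffers, can be sketched in software with a bounded queue. This is a generic illustration of double buffering, not the accelerator's RTL; the producer and consumer functions are hypothetical:

```python
import queue
import threading

def double_buffered_pipeline(produce, consume, n_items):
    """Ping-pong (double) buffering sketch: while the compute stage works
    on one buffer, the load stage fills the other, so data movement and
    computation overlap instead of alternating."""
    buffers = queue.Queue(maxsize=2)      # at most two buffers in flight
    done = object()                       # end-of-stream sentinel

    def loader():
        for i in range(n_items):
            buffers.put(produce(i))       # blocks while both buffers are full
        buffers.put(done)

    results = []
    threading.Thread(target=loader, daemon=True).start()
    while (buf := buffers.get()) is not done:
        results.append(consume(buf))      # compute on the filled buffer
    return results

print(double_buffered_pipeline(lambda i: list(range(i, i + 4)), sum, 3))  # -> [6, 10, 14]
```

The `maxsize=2` bound is what makes this "ping-pong" rather than unbounded prefetching: the loader can stay exactly one buffer ahead of the compute stage.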
Weakly-hard-constraint and Priority-distance Aware Partitioned Scheduling for Homogeneous Multicore Platforms
GONG Weiqiang, HAN Jianjun, ZHANG Chang’an
Computer Science. 2025, 52 (4): 101-109.  doi:10.11896/jsjkx.241000100
As weakly-hard real-time (WHRT) systems can effectively exploit computing resources while guaranteeing system stability by tolerating occasional temporal violations, they have been broadly applied in some real-life areas over the past two decades. However, there exist rather limited studies on the scheduling of WHRT tasks upon multicores. For the existing global-scheduling-based schemes, the high runtime overhead caused by task migration greatly restricts their practical viability, while the current job-level partitioned algorithm usually ignores the impact of task utilizations under weakly-hard constraints, which may significantly degrade the schedulability performance of task sets. To address such issues, building on global emergency-based scheduling (GEBS) for uniprocessors, two task partitioning algorithms are proposed: the weakly-hard-constraint aware task partitioning algorithm (WHCA-TPA) and the priority-distance aware task partitioning algorithm (PDA-TPA). Firstly, WHCA-TPA considers the interference between different tasks, providing a more reasonable estimate of system utilization, and uses it to allocate tasks more reasonably. In addition, PDA-TPA aims at reducing the number of preemptions between tasks to decrease the number of context switches, thus achieving lower runtime overhead. Extensive experimental results show that, compared to existing conventional mapping schemes, WHCA-TPA can achieve a higher schedulability ratio under various system parameters, while PDA-TPA and WHCA-TPA usually have lower runtime overhead than other mapping schemes.
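As context for the partitioning heuristics above, the conventional baseline they are compared against is utilization-driven bin packing such as first-fit decreasing. The sketch below is that generic baseline (with a plain EDF utilization bound), not WHCA-TPA or PDA-TPA, and the task set is made up for illustration:

```python
def first_fit_partition(tasks, n_cores):
    """First-fit decreasing task-to-core mapping by utilization.

    tasks: list of (name, wcet, period) tuples. A core accepts a task as
    long as its total utilization stays <= 1.0 (the EDF feasibility bound
    on one core, ignoring weakly-hard constraints entirely)."""
    cores = [[] for _ in range(n_cores)]
    util = [0.0] * n_cores
    for name, wcet, period in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
        u = wcet / period
        for i in range(n_cores):
            if util[i] + u <= 1.0:       # first core with enough capacity
                cores[i].append(name)
                util[i] += u
                break
        else:
            raise ValueError(f"task {name} does not fit on any core")
    return cores, util

tasks = [("a", 2, 10), ("b", 6, 10), ("c", 5, 10), ("d", 3, 10)]
print(first_fit_partition(tasks, 2))
```

Because it only looks at raw utilization, such a mapping cannot exploit the slack that weakly-hard constraints permit, which is exactly the gap the paper's partitioning algorithms target.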
Database & Big Data & Data Science
SCFNet: Fusion Framework of External Spatial Features for Spatio-temporal Prediction
LIU Tengfei, CHEN Liyue, FANG Jiangyi, WANG Leye
Computer Science. 2025, 52 (4): 110-118.  doi:10.11896/jsjkx.241000094
Road information is closely related to the current traffic pattern of roads; rich POI semantics can reveal the attributes of an area; and demographic data reveals the trend of population flow in an area. Considering the influence of these external spatial features on flow in spatio-temporal prediction can help a model make more accurate predictions. Existing external spatial modeling methods usually focus on the input external spatial features, learn spatially relevant semantic representations through neural network mapping, and then fuse them with the final spatio-temporal flow representations. However, due to the heterogeneity between flow representations and spatial features, existing external spatial feature modeling methods are often not highly scalable and can only target specific external spatial features or specific spatio-temporal models. To overcome these problems, we propose a spatial context fusion network for traffic forecasting (SCFNet). Specifically, we introduce an attention mechanism based on information interaction to compute attention scores between spatio-temporal representations and external spatial features, achieving an efficient fusion of the two, and we design a dynamic encoding method for time vectors to generate dynamic spatial feature semantics. SCFNet supports a mixture of different static spatial features such as regional demographic data, road information, and POI inputs. We conduct experiments on three real traffic datasets and demonstrate that SCFNet can significantly improve the prediction accuracy of various state-of-the-art spatio-temporal prediction methods such as MTGNN, ASTGCN, and GraphWaveNet.
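The attention-based fusion of spatio-temporal representations with external spatial features can be sketched with generic scaled dot-product attention. This is an illustrative stand-in, not SCFNet's exact layer; the shapes and random inputs are assumptions:

```python
import numpy as np

def fuse_spatial_features(H, E):
    """Fuse flow representations H (N regions x d) with external spatial
    features E (N regions x d): interaction scores between the two spaces
    reweight E before it is added back onto H (residual fusion)."""
    d = H.shape[1]
    scores = H @ E.T / np.sqrt(d)             # region-to-region interaction
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over regions
    return H + attn @ E                       # fused representation

H = np.random.default_rng(0).normal(size=(4, 8))   # flow representations
E = np.random.default_rng(1).normal(size=(4, 8))   # external spatial features
print(fuse_spatial_features(H, E).shape)  # (4, 8)
```

Because the fusion happens through learned interaction scores rather than concatenation, the same mechanism can wrap around different backbone predictors, which is the scalability property the abstract emphasizes.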
Joint Inter-word and Inter-sentence Multi-relationship Modeling for Review-based Recommendation Algorithm
DENG Ceyu, LI Duantengchuan, HU Yiren, WANG Xiaoguang, LI Zhifei
Computer Science. 2025, 52 (4): 119-128.  doi:10.11896/jsjkx.240700053
Reviews, a prevalent form of auxiliary information, directly reflect user preferences and item characteristics, and are extensively utilized by researchers to enhance the predictive accuracy of recommendation algorithms. However, current review-based recommendation algorithms still have shortcomings, mainly that existing models ignore multi-granularity feature extraction from review text and the relational interactive learning of user preferences and item attributes, a pair of heterogeneous features. This oversight leads to insufficient extraction of review information, compromising model accuracy. Thus, a joint inter-word and inter-sentence multi-relationship modeling for review-based recommendation algorithm (MR4R) is introduced in this study. Firstly, a multi-relational modeling strategy is adopted to analyze inter-word and inter-sentence relationships in review texts to extract layered feature information, thereby enriching the model's grasp of user preferences and refining item attribute representations. A fusion and prediction layer is designed to optimize the correlation mining process between user preferences and item attributes, and score prediction is carried out by high-order nonlinear calculation. The proposed model is compared with seven current mainstream recommendation algorithms on four distinct datasets. The results demonstrate that the recommendation algorithm, which incorporates multi-relational modeling between words and sentences, effectively extracts the information embedded in reviews, significantly enhancing average recommendation accuracy and exhibiting superior performance.
Deep Clustering Method Based on Dual-branch Wavelet Convolutional Autoencoder and Data Augmentation
AN Rui, LU Jin, YANG Jingjing
Computer Science. 2025, 52 (4): 129-137.  doi:10.11896/jsjkx.240100111
Deep clustering based on autoencoders is a representative algorithm for unsupervised learning. It has gained much attention in the field of computer vision in recent years. Compared to traditional algorithms, the compact representation space provided by the hidden layers of autoencoders offers a more flexible condition for clustering tasks. Existing autoencoder clustering methods mostly use a single-branch encoder network, while the exploration space for a dual-encoder structure combining multiple networks is still significant. To address this, a deep clustering method named DB-WCAE-DA (deep clustering method based on dual-branch wavelet convolutional autoencoder and data augmentation) is proposed. Firstly, a dual-branch convolutional autoencoder structure is designed by integrating wavelet transformation, mapping the data to a low-dimensional feature space for clustering. Secondly, on one branch, a von Mises-Fisher (vMF) mixture model is employed to construct a soft clustering assignment, preserving the geometric structure and directional information of the data. On the other branch, data augmentation techniques are introduced, along with the addition of noise in the embedded space, to enhance the encoder's learning capabilities. Through this dual-branch nested optimization process, the features of the data are continuously refined, resulting in more reliable clustering outcomes. Finally, the effectiveness of the model is validated on multiple benchmark datasets.
Consistent Block Diagonal and Exclusive Multi-view Subspace Clustering
WU Jie, WAN Yuan, LIU Qiujie
Computer Science. 2025, 52 (4): 138-146.  doi:10.11896/jsjkx.240100131
Subspace clustering methods provide an effective solution to the clustering problem of high-dimensional multi-view data. Focusing on the issue that the representation matrix cannot be made to obey the block diagonal property directly by using low-rank or sparse constraints in existing algorithms, a consistent block diagonal and exclusive multi-view subspace clustering (CBDE-MSC) method is proposed. CBDE-MSC decomposes the subspace representation matrix of each view into consistent and specific self-representation matrices. For the consistent self-representation matrix, a block diagonal constraint is used to make it have an approximate block diagonal structure and explore the consistency of the data. An exclusive constraint is applied between specific self-representation matrices to explore the complementarity of the data. The matrix L2,1 norm is used to constrain the error matrix so that it satisfies row sparsity. In addition, the alternating direction method of multipliers (ADMM) is used to optimize the objective function. CBDE-MSC is evaluated by normalized mutual information (NMI), accuracy (ACC), adjusted Rand index (AR) and F-score. Experimental results show that, compared with some existing excellent algorithms, CBDE-MSC achieves a great improvement on the four indicators; in particular, on the YaleB dataset, CBDE-MSC improves NMI, ACC, AR and F-score by 0.088, 0.127, 0.145 and 0.122, respectively, over the classical method CSMSC, which verifies the effectiveness of the proposed algorithm.
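The L2,1 norm used for the error matrix is simply the sum of the L2 norms of the rows, which is why minimizing it drives whole rows to zero (row sparsity). A one-line sketch:

```python
import numpy as np

def l21_norm(E):
    """Matrix L2,1 norm: sum over rows of each row's Euclidean norm.

    (This is the row-wise convention matching the abstract's row-sparsity
    claim; some papers apply the same definition column-wise.)"""
    return np.sqrt((E ** 2).sum(axis=1)).sum()

E = np.array([[3.0, 4.0],
              [0.0, 0.0],   # an all-zero row contributes nothing
              [0.0, 5.0]])
print(l21_norm(E))  # -> 10.0
```

Contrast with the Frobenius norm, which spreads the penalty over all entries: the L2,1 penalty is indifferent to how error is distributed within a row, so whole samples (rows) are either kept or suppressed.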
Knowledge Extraction Algorithm Based on Hypergraphs
LIU Chuan, DU Baocang, MAO Hua
Computer Science. 2025, 52 (4): 147-160.  doi:10.11896/jsjkx.240100084
Knowledge extraction has always been one of the topics in computer science research. However, some existing knowledge extraction methods are not sufficient to meet practical needs in terms of visualization and latent knowledge extraction. It is well known that knowledge consists of definable knowledge and latent knowledge, and definable knowledge can be obtained while the latent knowledge is extracted, but not vice versa. Regarding the extraction of definable knowledge, many achievements have been made, but relatively less attention has been paid to the extraction of latent knowledge, especially how to extract latent knowledge through visualization methods, which is an urgent problem to be solved. Therefore, utilizing the visualization characteristics of hypergraphs in the context of information systems, this paper explores the correspondence between information systems and hypergraphs, and proposes methods for their mutual conversion. Using this method, combined with hypergraph theory and rough set theory, a pair of hypergraph-based upper and lower approximation operators is defined. Furthermore, the concept of approximate hypergraphs is proposed, and its properties are explored. The construction of approximate hypergraphs is completed, and an effective method for knowledge extraction under the hypergraph framework is implemented. By comparison with classical and recently proposed approximation theories and knowledge extraction methods, the advantages of the proposed method in terms of approximation and knowledge extraction are demonstrated. The correctness of the proposed method is verified through practical examples, indicating its applicability. The proposed method is a development of and supplement to existing knowledge extraction theories.
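The classical rough-set upper and lower approximations that the paper's hypergraph operators generalize can be sketched directly. Here the blocks play the role of equivalence classes induced by an information system (or, in the paper's reading, hyperedges); the concrete sets are illustrative:

```python
def approximations(blocks, target):
    """Rough-set lower and upper approximation of a target concept.

    blocks: granules of the universe (e.g. indiscernibility classes).
    lower = union of blocks fully inside the target (certain members);
    upper = union of blocks meeting the target (possible members)."""
    target = set(target)
    lower, upper = set(), set()
    for block in blocks:
        b = set(block)
        if b <= target:      # block entirely contained in the concept
            lower |= b
        if b & target:       # block overlaps the concept
            upper |= b
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5}]
lo, up = approximations(blocks, {1, 2, 3})
print(sorted(lo), sorted(up))  # [1, 2] [1, 2, 3, 4]
```

The gap between the two approximations (here {3, 4}) is the boundary region, and it is precisely this region that latent-knowledge extraction methods try to characterize.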
Semi-supervised Partial Multi-label Feature Selection
WU You, WANG Jing, LI Peipei, HU Xuegang
Computer Science. 2025, 52 (4): 161-168.  doi:10.11896/jsjkx.240600008
Multi-label feature selection is a technique for reducing feature dimensionality by filtering out a subset of features with distinguishing power from the original feature space. However, traditional methods face the problem of degraded labeling accuracy. Real data instances are labeled with a set of candidate labels, which may include noise labels in addition to relevant labels, resulting in biased multi-label data. Existing multi-label feature selection algorithms typically assume accurate labeling of training samples or only consider missing labels. Furthermore, large-scale high-dimensional multi-label datasets in real situations often have only a small portion of labeled data. Therefore, this paper presents a new semi-supervised partial multi-label feature selection method. Firstly, considering the partial multi-label issue, the method learns the true relationships between labels from samples with known labels, and the structural consistency between the feature space and the label space is maintained by using a manifold regularization technique. Secondly, considering the label missing issue, the method incorporates unlabeled data and enhances the label information through a label propagation algorithm. Additionally, to handle high-dimensional features, low-rank constraints are applied to the mapping matrix to expose implicit connections between labels, and features with strong distinguishing ability are selected by introducing l2,1 norm constraints. Experimental results demonstrate significant performance advantages of the proposed method compared to existing semi-supervised multi-label feature selection methods.
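The label propagation step used to enrich the unlabeled portion of the data can be sketched in its classic graph-diffusion form (Zhou et al. style). The chain graph and parameters below are illustrative, not the paper's construction:

```python
import numpy as np

def propagate_labels(W, Y, labeled_mask, alpha=0.99, iters=100):
    """Diffuse label scores over a similarity graph, clamping known rows.

    W: symmetric similarity matrix (nonzero row sums);
    Y: one-hot label matrix with zero rows for unlabeled samples."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # propagate, keep a pull to Y
        F[labeled_mask] = Y[labeled_mask]     # clamp the labeled samples
    return F

# A 4-node chain with labels at both ends: the middle nodes inherit
# the label of the nearer endpoint.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
mask = np.array([True, False, False, True])
pred = propagate_labels(W, Y, mask).argmax(axis=1)
print(pred)  # [0 0 1 1]
```

In the multi-label setting, each column of Y is one label, so a single propagation pass fills in soft scores for all labels of the unlabeled samples at once.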
Index for Counting the Shortest Cycle Based on Strongly Connected Components
YANG Ying, ZHOU Junfeng, DU Ming
Computer Science. 2025, 52 (4): 169-176.  doi:10.11896/jsjkx.240200105
Abstract PDF(1510KB) ( 99 )   
References | Related Articles | Metrics
Shortest cycle counting is a basic pattern of graph analysis: the shortest cycle count of a vertex is the number of shortest cycles passing through that vertex. It has a wide range of real-world applications, such as fraudulent transaction detection, criminal pre-screening, and file-sharing optimization. In response to the large index space and low query efficiency of existing methods, this paper studies building a shortest cycle counting index directly on the original graph and proposes the STC index, a shortest cycle counting index that requires no graph transformation operations. The index classifies shortest cycles according to their characteristics and constructs different index information for each type; because it is built directly on the original graph, it further improves query efficiency while ensuring that the index size and the index construction time do not grow. In addition, based on the special relationship between cycles and strongly connected components, this paper proposes an indexing strategy that builds the shortest cycle counting index within each strongly connected component, which further improves index construction efficiency, effectively reduces index size, and improves query efficiency. Experiments on 10 real datasets verify the efficiency of the proposed STC index and the strongly-connected-component-based strategy, which effectively reduce the index space and improve the efficiency of index construction and querying.
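For reference, the quantity being indexed can be defined by a brute-force baseline: a BFS with shortest-path counting from the query vertex, closed by the edges that return to it. This sketch is only the unindexed per-query computation, not the paper's STC index.

```python
from collections import deque

# Length and count of shortest cycles through vertex v in a directed graph.
# BFS from v computes shortest distances and the number of shortest paths;
# every edge (u, v) then closes candidate cycles of length dist[u] + 1.

def shortest_cycle_count(adj, v):
    dist, cnt = {v: 0}, {v: 1}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj.get(u, []):
            if w not in dist:
                dist[w], cnt[w] = dist[u] + 1, cnt[u]
                q.append(w)
            elif dist[w] == dist[u] + 1:
                cnt[w] += cnt[u]
    best, ways = None, 0
    for u, outs in adj.items():          # edges (u, v) close the cycle
        if v in outs and u in dist and u != v:
            length = dist[u] + 1
            if best is None or length < best:
                best, ways = length, cnt[u]
            elif length == best:
                ways += cnt[u]
    return best, ways

adj = {0: [1, 2], 1: [0], 2: [3], 3: [0]}
print(shortest_cycle_count(adj, 0))  # (2, 1): the cycle 0->1->0
```

An index-based method precomputes enough of this information that each query avoids a full BFS.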
Computer Graphics & Multimedia
Complex Organ Segmentation Based on Edge Constraints and Enhanced Swin Unetr
PENG Linna, ZHANG Hongyun, MIAO Duoqian
Computer Science. 2025, 52 (4): 177-184.  doi:10.11896/jsjkx.240600007
Abstract PDF(3140KB) ( 100 )   
References | Related Articles | Metrics
To address the challenges of blurred organ edges and large differences in organ proportions in abdominal CT multi-organ segmentation, this paper proposes a complex organ segmentation approach based on edge constraints and an enhanced Swin Unetr. To extract features at different granularities from organs with different voxel proportions, a Masked Attention Block is introduced: the mask information of each organ is computed and the corresponding features are extracted. Then, based on dataset priors and the mask information, refined feature extraction is conducted within appropriate window and block sizes to obtain the fine-grained features needed to segment organs with small voxel proportions. After preliminary semantic segmentation predictions are generated, to fully leverage boundary information and enhance the model's handling of it, the semantic features are passed through additional convolutional layers to capture boundary details, and the semantic segmentation results are constrained by an edge prediction task through edge loss minimization. The proposed method is trained and tested on the BTCV and TCIA pancreas-CT datasets. The enhancement modules are incorporated into the convolution-based UNet++ and the Transformer-based Swin Unetr for training, and comparative experiments are conducted against classic networks such as Unetr. On the BTCV dataset, the Dice coefficients reach 0.847 9 and 0.840 6, with corresponding Hausdorff distances of 11.76 and 8.35, respectively. Overall, the proposed method outperforms the other comparative methods, confirming its effectiveness and feasibility.
Adaptive Contextual Learning Method Based on Iris Texture Perception
KONG Jialin, ZHANG Qi, WEI Jianze, LI Qi
Computer Science. 2025, 52 (4): 185-193.  doi:10.11896/jsjkx.250100022
Abstract PDF(2929KB) ( 96 )   
References | Related Articles | Metrics
The microstructures in the iris exhibit a high degree of individual distinctiveness, making iris recognition an ideal choice for identity verification. In addition to the characteristics of the microstructures themselves, the context among them also serves as an effective cue for identity verification. To model the correlations between iris microstructures, an adaptive contextual learning method based on iris texture perception is proposed. The method improves upon the dual-branch structure of the contextual measures model by incorporating channel attention and efficient multi-scale attention mechanisms, which dynamically and adaptively adjust the feature maps, capturing features across different levels of detail and enhancing sensitivity to iris microstructures. To thoroughly exploit the correlation between global and local features, attention mechanisms adaptively fuse the features extracted by the dual-branch network; this weighting flexibly assigns different weights according to the importance or relevance of the input, aiming to learn the optimal feature associations. Experimental results demonstrate that the adaptive contextual learning method performs excellently in iris recognition tasks, surpassing existing baseline methods across multiple evaluation metrics with higher recognition accuracy and stronger generalization ability.
Research on Individual Identification of Cattle Based on YOLO-Unet Combined Network
ZHOU Yi, MAO Kuanmin
Computer Science. 2025, 52 (4): 194-201.  doi:10.11896/jsjkx.240100144
Abstract PDF(4407KB) ( 101 )   
References | Related Articles | Metrics
Non-contact methods for individual cattle identification have advantages in reducing identification cost, simplifying the identification process, and identifying accurately, and they have developed rapidly in recent years. Existing research still has problems, such as recognition accuracy that is strongly affected by external factors (environment, weather, etc.) and difficult transfer learning. In view of these problems, a three-module cattle identification model based on a YOLO-Unet combined network is proposed. Firstly, an image extraction module built on the YOLOv5 model extracts cattle facial images. Then, a background removal module built on the Unet model removes the background of the facial images to eliminate environmental influence and improve the model's generalization ability. Finally, an individual classification module built on the MobileNetV3 model classifies the background-removed facial images. An ablation experiment on the background removal module shows that it greatly improves the model's generalization ability: the recognition accuracy of the model with the background removal module is 90.48%, which is 11.99% higher than that of the model without it.
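The background-removal step can be sketched as simple masking: assuming the Unet stage yields a binary facial mask (array names and sizes below are illustrative), pixels outside the mask are zeroed before the image reaches the classifier.

```python
import numpy as np

# Zero out everything outside the segmentation mask so the classifier
# never sees background pixels (barn, pasture, weather conditions).

def remove_background(image, mask):
    """image: HxWx3 uint8; mask: HxW in {0, 1} from the segmentation model."""
    return image * mask[:, :, None].astype(image.dtype)

img = np.full((4, 4, 3), 200, dtype=np.uint8)   # toy "photo"
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                              # pretend the face is centred

out = remove_background(img, mask)
print(out[0, 0].tolist(), out[1, 1].tolist())   # [0, 0, 0] [200, 200, 200]
```

Because the classifier is trained only on masked images, any environment-specific signal is removed at the source, which is what drives the reported generalization gain.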
An Improved YOLOv8 Object Detection Algorithm for UAV Aerial Images
HU Huijuan, QIN Yifeng, XU He, LI Peng
Computer Science. 2025, 52 (4): 202-211.  doi:10.11896/jsjkx.240500042
Abstract PDF(4656KB) ( 113 )   
References | Related Articles | Metrics
Aiming at the problems of diverse target scales, complex backgrounds, small-target aggregation, and the limited computing resources of drone platforms in aerial image target detection, an improved YOLOv8 target detection algorithm, YOLOv8-CEBI, is proposed. Firstly, a lightweight Context Guided module is introduced into the backbone network to significantly reduce the number of model parameters and the amount of computation, and a multi-scale attention mechanism, EMA, is introduced to capture fine-grained spatial information and improve detection of small targets against complex backgrounds. Secondly, the weighted bidirectional feature pyramid network BiFPN is introduced to restructure the neck, enhancing multi-scale feature fusion while keeping the parameter cost unchanged. Finally, the Inner-CIoU loss function generates auxiliary bounding boxes to calculate the loss more accurately and accelerate the bounding box regression process. Experiments on the VisDrone dataset show that, compared with the original YOLOv8s algorithm, the proposed method reduces the parameter count by 51.3% and the computation by 28.5% while increasing mAP50 by 1.6%, achieving a balance between reduced computing resources and improved accuracy.
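The auxiliary-box idea behind the Inner-CIoU loss can be sketched as follows: both boxes are rescaled about their centres by a ratio and the IoU of these auxiliary "inner" boxes is computed. The full Inner-CIoU additionally includes CIoU's centre-distance and aspect-ratio penalty terms, which are omitted here, and the ratio value is an assumption.

```python
# Inner-IoU sketch: shrink (ratio < 1) or grow (ratio > 1) each axis-aligned
# box (x1, y1, x2, y2) about its centre, then compute the ordinary IoU of
# the rescaled boxes. Smaller inner boxes sharpen the gradient signal for
# nearly-overlapping boxes during regression.

def scale_box(box, ratio):
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * ratio, (y2 - y1) * ratio
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def inner_iou(pred, gt, ratio=0.7):
    return iou(scale_box(pred, ratio), scale_box(gt, ratio))

pred, gt = (0, 0, 4, 4), (1, 1, 5, 5)
print(round(iou(pred, gt), 4), round(inner_iou(pred, gt), 4))
```

A loss would then be built as 1 minus this quantity (plus the CIoU penalty terms) and minimized during training.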
Research on Virtual Reality Head Rotation Gain Under the Influence of Pitch Angle
XIE Zehua, FU Yueyao, HE Yu, XU Senzhe, REN Yangfu, YU Ge, ZHANG Songhai
Computer Science. 2025, 52 (4): 212-221.  doi:10.11896/jsjkx.240700040
Abstract PDF(4150KB) ( 91 )   
References | Related Articles | Metrics
Increasing head rotation gain in virtual reality significantly enhances user exploration efficiency. Due to spatial constraints, users often need to perform extensive head rotations to observe an entire scene, which sacrifices comfort. Applying head rotation gain enables wide-angle rotation in virtual scenarios, allowing users to view the environment comprehensively. We note that when exploring virtual reality scenes users frequently need to look up or down, yet current research pays insufficient attention to head rotation thresholds in these postures. Consequently, this paper focuses on the differences in head rotation gain among looking up, looking down, and looking straight ahead, and explores the comfortable head rotation threshold ranges once pitch angles are incorporated. By designing virtual scenes that guide users to rotate their heads at various pitch angles and collecting feedback through psychophysical experiments, this study compares head rotation gains across different states. The results indicate that introducing pitch angles significantly affects users' perception of rotation thresholds, with notable differences in comfortable rotation thresholds at different pitch angles.
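The notion of head rotation gain reduces to a single scaling: the virtual camera yaw is the physical head yaw multiplied by a gain factor. The comfort range used below is a placeholder for illustration, not the thresholds measured in the study.

```python
# Head rotation gain: gain > 1 means a physically small turn sweeps a wider
# virtual angle. The detection/comfort interval here is a made-up placeholder;
# the paper's point is that this interval shifts with pitch angle.

def virtual_yaw(real_yaw_deg, gain):
    return real_yaw_deg * gain

def within_comfort(gain, low=0.9, high=1.3):
    """Placeholder comfort range for a given posture."""
    return low <= gain <= high

print(virtual_yaw(90, 1.2))               # 108.0 virtual degrees for a 90-degree turn
print(within_comfort(1.2), within_comfort(1.5))
```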
Fast Contour-based Object-space Hidden Line Removal Algorithm for Mesh
SONG Haichuan, QIU Sunhong, WANG Xinxing, LI Yijin, CHEN Zhenhua, CHEN Xiaodiao
Computer Science. 2025, 52 (4): 222-230.  doi:10.11896/jsjkx.240700042
Abstract PDF(4328KB) ( 99 )   
References | Related Articles | Metrics
Hidden line removal, which eliminates lines occluded under a given viewing angle, is a key technique for addressing visual clutter in 3D scenes. Object-space hidden line removal can calculate the precise locations of visibility transformation points, so it is widely used in practical engineering for 3D visualization modeling, high-precision drawing, and other purposes. While many mature object-space hidden line removal algorithms exist for planar polyhedra, they often suffer from low computational efficiency on the mesh models commonly used in practical engineering because of the large number of triangles on model surfaces. To address this issue, this paper proposes a fast contour-based object-space hidden line removal algorithm for meshes. The algorithm filters triangular facets based on the intersections of the projected contour lines of mesh objects and performs intersection calculations only on these, avoiding most redundant intersection computations. Additionally, after the intersection calculations, the algorithm rapidly determines visibility from the relation between the line segments on which potential visibility transformation points lie and the contour lines and model, further enhancing efficiency. Experimental results show that, for the hidden line removal of ordinary and complex mesh models under two common removal modes, the efficiency of the presented algorithm is over 20 times and 80 times higher, respectively, than that of the compared algorithm, and the efficiency gap between our algorithm and the mainstream geometric kernel ACIS is within a factor of 2.5.
Long-tail Distributed Medical Image Classification Based on Large Selective Kernel Bilateral-branch Networks
SUN Tanghui, ZHAO Gang, GUO Meiqian
Computer Science. 2025, 52 (4): 231-239.  doi:10.11896/jsjkx.240700039
Abstract PDF(2497KB) ( 95 )   
References | Related Articles | Metrics
In medical scenarios, datasets often exhibit a long-tailed distribution, wherein the imbalance may cause models to favor head classes, degrading the recognition of tail classes and thus overall accuracy. Common approaches use data augmentation to transform the original data into a balanced distribution, but the quality of augmented tail-class samples is often inadequate and fails to genuinely improve tail-class accuracy. To address this issue, this paper proposes a large selective kernel bilateral-branch network model (LSKBB). The model consists of two main parts, a conventional learning branch and a re-balancing branch, and adopts the LSK module to acquire key information and focus on contextual information. Additionally, a dynamic loss function is designed to let the model transition gradually from one learning focus to the other, thereby enhancing classification accuracy. In image classification experiments on long-tailed medical datasets, conducted without altering their distribution characteristics, the proposed LSKBB model improves on existing methods. At imbalance ratios of 10, 50, and 100, its accuracy increases by 1.41%, 1.25%, and 1.25%, respectively, on the BreaKHis dataset, and by 6.10%, 3.15%, and 2.47%, respectively, on the ChestX-ray dataset. The experimental results indicate that the LSKBB model performs well under different imbalance ratios and is suitable for classification and detection on medical datasets with long-tailed distributions.
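The dynamic loss can be sketched as a scheduled interpolation between the two branches' losses; the quadratic schedule below is an assumption modeled on common bilateral-branch network practice, not necessarily the paper's exact function.

```python
# Weight the conventional-learning branch heavily at the start of training
# and shift weight to the re-balancing branch as epochs progress, using
# alpha = 1 - (t / T)^2 (an assumed schedule).

def branch_weight(epoch, total_epochs):
    return 1.0 - (epoch / total_epochs) ** 2   # weight of the conventional branch

def dynamic_loss(loss_conv, loss_rebal, epoch, total_epochs):
    a = branch_weight(epoch, total_epochs)
    return a * loss_conv + (1 - a) * loss_rebal

print(dynamic_loss(2.0, 1.0, 0, 100))    # 2.0: all weight on the conventional branch
print(dynamic_loss(2.0, 1.0, 100, 100))  # 1.0: all weight on the re-balancing branch
```

The smooth handover lets the model first learn general features from the full distribution and only later specialize toward tail classes, instead of switching objectives abruptly.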
Artificial Intelligence
Automatic Optimization and Evaluation of Prompt Fairness Based on Large Language Model Itself
ZHU Shucheng, HUO Hongying, WANG Weikang, LIU Ying, LIU Pengyuan
Computer Science. 2025, 52 (4): 240-248.  doi:10.11896/jsjkx.240900008
Abstract PDF(2513KB) ( 142 )   
References | Related Articles | Metrics
With the rapid development of large language models, model fairness has garnered increasing attention, primarily regarding biases in generated text and downstream tasks. To produce fairer text, prompts must be carefully designed and their fairness examined. This study employs four Chinese large language models as optimizers to automatically and iteratively generate fair prompts describing both advantaged and disadvantaged groups. It also investigates how variables such as model temperature, initial prompt type, and optimization direction affect the optimization process, while assessing the fairness of various prompt styles, including chain-of-thought and persona. The results indicate that large language models can effectively generate prompts that are either less or more biased, with prompts for advantaged groups performing better at lower temperature settings. Generating biased prompts is relatively more challenging, with the models employing anti-adversarial strategies to tackle this task. Using questions as initial prompts can yield outputs that are more random yet of higher quality. Different models exhibit distinct optimization strategies, with chain-of-thought and debiasing styles producing fairer text. Prompts play a crucial role in model fairness, and their fairness warrants further investigation.
Multi-turn Dialogue Tutoring Model Based on Knowledge Forest
XIAO Xinyuan, TANG Jiuyang
Computer Science. 2025, 52 (4): 249-254.  doi:10.11896/jsjkx.240700129
Abstract PDF(2567KB) ( 84 )   
References | Related Articles | Metrics
In intelligent tutoring scenarios, traditional knowledge-graph-based tutoring algorithms cannot represent the cognitive relationships between knowledge topics, and traditional recommendation methods lack dynamic interaction between students and the knowledge system; both problems can leave students lost during learning. Starting from the knowledge forest model, this paper proposes a multi-turn dialogue tutoring model that covers the sequential structure of knowledge, knowledge centrality and difficulty, and dynamic interaction. It quantitatively evaluates the knowledge-disorientation cost students face during learning in terms of centrality and prerequisite knowledge. To verify the effectiveness of the algorithm, this paper quantitatively analyzes the knowledge disorientation problem of a random learning strategy and carries out comparative experiments between the multi-turn dialogue tutoring model and the random learning strategy on the subject datasets GeoQSP, HisQSP, PhyQSP and DSAQSP. The experimental results show that the multi-turn dialogue tutoring model based on the knowledge forest greatly alleviates the disorientation caused by missing cognitive relationships between knowledge topics and the lack of dynamic interaction.
Automatic Identification and Classification of Topical Discourse Markers
YANG Jincai, YU Moyang, HU Man, XIAO Ming
Computer Science. 2025, 52 (4): 255-261.  doi:10.11896/jsjkx.240100155
Abstract PDF(1793KB) ( 95 )   
References | Related Articles | Metrics
Discourse markers,a kind of linguistic markers at the pragmatic level which have functions of organizing discourse,guiding signifier,and expressing emotions,have attracted extensive attention in linguistics.The accurate identification of discourse markers and categories plays an important role in the comprehension of text and the grasp of the speaker’s intention and emotion.In the past decade,scholars at home and abroad have conducted research on function,characteristics,sources and systematic classification of discourse markers and achieved rich results.However,due to the changeable forms,diverse sources,abstract features,and variants,it is difficult for machines to automatically identify discourse markers.In this paper,an NFLAT pointer network model integrating external linguistic features is proposed,which takes topical discourse markers as the research object,and realizes the automatic recognition and classification of discourse markers in discourse.Experimental results show that the precision of the trained model for the recognition and classification of topical discourse markers reaches 94.55%.
CGR-BERT-ZESHEL:Zero-shot Entity Linking Model with Chinese Features
PAN Jian, WU Zhiwei, LI Yanjun
Computer Science. 2025, 52 (4): 262-270.  doi:10.11896/jsjkx.240100119
Abstract PDF(2226KB) ( 77 )   
References | Related Articles | Metrics
Current research on entity linking has paid less attention to Chinese entities, emerging entities, and unknown entities. Additionally, traditional BERT models ignore two crucial aspects of Chinese, glyphs and radicals, which provide important syntactic and semantic information for language understanding. To solve these problems, this paper proposes a zero-shot entity linking model based on Chinese features, CGR-BERT-ZESHEL. The model incorporates glyph and radical features through visual image embeddings and traditional character embeddings, respectively, to enrich word vector features and mitigate the effect of out-of-vocabulary words. A two-stage pipeline of candidate entity generation and candidate entity ranking then produces the results. Experimental results on the Hansel and CLEEK datasets show that, compared with the baseline model, Recall@100 improves by 17.49% and 7.34% in the candidate entity generation stage, and accuracy improves by 3.02% and 3.11% in the candidate entity ranking stage. The proposed model also outperforms the other baseline models in both Recall@100 and accuracy.
Improved Genetic Algorithm with Tabu Search for Asynchronous Hybrid Flow Shop Scheduling
WANG Sitong, LIN Rongheng
Computer Science. 2025, 52 (4): 271-279.  doi:10.11896/jsjkx.240600049
Abstract PDF(2420KB) ( 124 )   
References | Related Articles | Metrics
Compared with the traditional assembly line, the hybrid flow shop has higher flexibility and can adapt to changing production scenarios, but its scheduling is harder to solve, a common problem in modern manufacturing systems. Aiming at the high computational difficulty and low search efficiency of swarm intelligence evolutionary algorithms on this problem, an improved genetic algorithm with tabu search is proposed to minimize total completion time. Exploiting the characteristics of the problem, in which all workpieces share the same production process, the number of workpieces is large, and the parallel machines at each stage run at different speeds, the algorithm adopts a single-layer encoding based on the first-stage workpiece order, a decoding method with a three-level machine selection priority, and a variety of genetic and tabu search operators. It achieves better search performance and faster convergence while maintaining solution quality. Finally, the algorithm is evaluated on 40 instances and workshop cases and compared with other algorithms. The experimental results show that it performs well on medium-scale instances, large-scale instances, and workshop cases: the completion time of the scheduling results is shortened by 10.71% on average, the number of iterations needed to reach the optimal solutions is reduced by 25.72%, and the running time is shortened by 10.79%.
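A miniature version of such a hybrid can be sketched on a single-machine total-completion-time problem; the paper's encoding/decoding layers for the multi-stage hybrid flow shop are omitted, and all operators and parameters here are illustrative.

```python
import random

# GA evolves job permutations; a tabu-search pass intensifies the best
# individual each generation. Objective: total completion time (to minimize).

def total_completion_time(order, p):
    t = done = 0
    for j in order:
        t += p[j]
        done += t
    return done

def tabu_improve(order, p, iters=30, tenure=5):
    best = cur = list(order)
    tabu = []
    for _ in range(iters):
        cand, move = None, None
        for i in range(len(cur)):
            for j in range(i + 1, len(cur)):
                if (cur[i], cur[j]) in tabu:
                    continue                      # move is tabu, skip it
                nxt = list(cur)
                nxt[i], nxt[j] = nxt[j], nxt[i]
                if cand is None or total_completion_time(nxt, p) < total_completion_time(cand, p):
                    cand, move = nxt, (cur[i], cur[j])
        cur = cand
        tabu = (tabu + [move])[-tenure:]          # fixed-length tabu list
        if total_completion_time(cur, p) < total_completion_time(best, p):
            best = cur
    return best

def ga_with_tabu(p, pop_size=20, gens=30, seed=0):
    rng = random.Random(seed)
    jobs = list(range(len(p)))
    pop = [rng.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: total_completion_time(o, p))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(jobs))     # one-point order crossover
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(len(jobs)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
        pop[0] = tabu_improve(pop[0], p)          # intensify the current best
    return min(pop, key=lambda o: total_completion_time(o, p))

p = [7, 2, 9, 4, 1]                               # toy processing times
best = ga_with_tabu(p)
print(best, total_completion_time(best, p))
```

For this objective the optimum is the shortest-processing-time order (here total 48), which the hybrid should reach quickly; in the flow-shop setting the same skeleton applies with the decoding step replacing `total_completion_time`.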
Method for Selecting Observers Based on Doubly Resolving Set in Independent Cascade Model
CHEN Zhangyuan, CHEN Ling, LIU Wei, LI Bin
Computer Science. 2025, 52 (4): 280-290.  doi:10.11896/jsjkx.240300127
Abstract PDF(4795KB) ( 83 )   
References | Related Articles | Metrics
With the development of the Internet, rumors can spread quickly on social networks. Finding the source of a rumor helps stop the spread of negative influence, so the source localization problem has great research value. Currently, the most effective source localization methods are based on observer nodes, but existing observer selection neither considers the uniformity of node distribution in the network nor sets the number of observers with regard to the topological characteristics of the graph, pre-setting it instead. This paper studies observer placement from two perspectives: a node budget threshold and a node coverage threshold. Taking into account the activation status of observer nodes and the discriminative distance to the source set, a novel K-differentiation algorithm is proposed, which first selects initial observer nodes based on the concept of the doubly resolving set. It then chooses an anchor node and greedily selects observer nodes based on a combination of coverage and budget differences until the specified coverage and budget thresholds are reached, addressing the budget and coverage problems posed in this paper within the same source localization algorithm. Experiments are conducted on real-world social datasets, comparing the proposed observer selection algorithm with various alternatives under the same source localization algorithm. The results indicate that it outperforms the other algorithms in source localization accuracy and average error distance, achieving excellent results on the large dataset with only 5%~10% observer nodes.
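The doubly resolving set concept used to seed observer selection can be checked directly from BFS distances: a set S doubly resolves the graph if no vertex pair has a constant distance-difference signature over S. The greedy growth below is only illustrative and is not the paper's K-differentiation algorithm.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def is_doubly_resolving(adj, S):
    # S fails if some pair (u, v) has d(u, x) - d(v, x) identical for all x, y in S
    dist = {x: bfs_dist(adj, x) for x in S}
    for u, v in combinations(adj, 2):
        if all(dist[x][u] - dist[x][v] == dist[y][u] - dist[y][v]
               for x in S for y in S):
            return False
    return True

def greedy_observers(adj):
    # naive growth: keep adding nodes until the set doubly resolves the graph
    nodes = list(adj)
    S = nodes[:1]
    for n in nodes[1:]:
        S.append(n)
        if is_doubly_resolving(adj, S):
            return S
    return S

# 4-cycle 0-1-2-3-0: {0, 1} is not doubly resolving, {0, 1, 2} is
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_observers(C4))  # [0, 1, 2]
```

Distinct distance-difference signatures are what allow observers to disambiguate candidate sources from infection-arrival times, which is why a doubly resolving set is a natural seed set.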
High Performance Computing
Input-aware Generalized Matrix-Vector Product Algorithm for Adaptive Performance Optimization of Hygon DCU
LI Qing, JIA Haipeng, ZHANG Yunquan, ZHANG Sijia
Computer Science. 2025, 52 (4): 291-300.  doi:10.11896/jsjkx.241100030
Abstract PDF(4567KB) ( 91 )   
References | Related Articles | Metrics
GEMV (generalized matrix-vector multiplication) is a core component of the BLAS (basic linear algebra subprograms) library, widely used in computer science, engineering computation, and mathematical computation. With the continuous iteration of the domestic Hygon DCU, it has gained a certain competitive advantage over traditional GPU vendors' products. As GEMV's application fields expand, its input characteristics grow increasingly diverse, and no single optimization method can deliver high performance for the GEMV algorithm across all inputs on GPU computing platforms. Therefore, on top of traditional optimization techniques such as memory access optimization, instruction rearrangement, parallel reduction, shared memory, and thread scheduling, this paper proposes an input-aware adaptive performance optimization method, which automatically adjusts the implementation of the computation kernel according to the size and shape of the input matrix to achieve optimal performance, significantly improving GEMV performance on the Hygon DCU. Experimental results show that the input-aware generalized matrix-vector multiplication algorithm implemented in this paper on the Hygon DCU Z100SM clearly outperforms the related algorithms in the rocBLAS library, reaching at most 3.020 3 times the performance of the corresponding rocBLAS algorithms across different matrix input sizes.
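The input-aware dispatch idea can be sketched as shape-dependent kernel selection; the two "kernels" below are NumPy stand-ins for DCU kernels that would differ in thread mapping and shared-memory use, and the dispatch threshold is a made-up value.

```python
import numpy as np

# Both stand-in kernels compute y = A @ x; a real implementation would pick
# between GPU kernels with different thread-to-row mappings and reduction
# strategies depending on whether the matrix is tall, wide, or square.

def gemv_row_per_thread(A, x):
    # good when rows are many and short: one "thread" accumulates one row
    return np.array([row @ x for row in A])

def gemv_block_reduce(A, x):
    # good for wide rows: split columns into blocks, reduce partial sums
    halves = np.array_split(A, 2, axis=1)
    xs = np.array_split(x, 2)
    return sum(h @ xv for h, xv in zip(halves, xs))

def gemv_dispatch(A, x):
    m, n = A.shape
    if n >= 4 * m:                 # wide matrix: reduction-style kernel
        return gemv_block_reduce(A, x)
    return gemv_row_per_thread(A, x)

A = np.arange(12, dtype=np.float64).reshape(3, 4)
x = np.ones(4)
print(gemv_dispatch(A, x))         # same result as A @ x
```

The key property is that every path returns the identical mathematical result; only the execution strategy, and hence the achieved bandwidth, changes with the input shape.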
Lightweight Heterogeneous Secure Function Computing Acceleration Framework
ZHAO Chuan, HE Zhangzhao, WANG Hao, KONG Fanxing, ZHAO Shengnan, JING Shan
Computer Science. 2025, 52 (4): 301-309.  doi:10.11896/jsjkx.240600046
Abstract PDF(2630KB) ( 66 )   
References | Related Articles | Metrics
Currently, data has become a crucial strategic resource, and data mining and analysis technologies play an important role in many industries, but they also carry risks of data leakage. Secure function evaluation (SFE) can compute arbitrary functions while keeping data secure. Yao's protocol, a protocol for secure function computation, involves a large number of encryption and decryption operations in the garbled circuit (GC) generation and evaluation phases and has high computational overhead in the oblivious transfer (OT) phase, making it hard to meet the demands of complex real-world applications. To address the efficiency issues of Yao's protocol, this work accelerates it with heterogeneous computing based on field programmable gate arrays (FPGAs), combines it with a proposed lightweight proxy oblivious transfer protocol, and ultimately designs a lightweight heterogeneous secure computation acceleration framework. In this solution, a CPU-FPGA heterogeneous computing architecture is implemented for both garbled circuit generation and proxy computation: it leverages the CPU's strength in control flow and the FPGA's parallel processing capability to accelerate the garbled circuit generation and evaluation phases, increasing their efficiency and reducing computational pressure. In addition, compared with oblivious transfer protocols implemented with asymmetric cryptography, the lightweight proxy oblivious transfer protocol requires only symmetric operations from the garbled circuit generator and the proxy calculator, which then obtains the generator's random number corresponding to the user's input; this alleviates the computational pressure on the user and the server during the oblivious transfer phase. Experimental results show that in a local area network environment, compared with a software implementation of Yao's protocol (the TinyGarble framework), our solution improves computational efficiency by at least 128 times.
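To make the garbled circuit generation and evaluation phases concrete, here is a toy hash-based garbled AND gate; production garbling (the workload accelerated above) would use fixed-key AES, point-and-permute, free-XOR, and related optimizations rather than this simplified construction.

```python
import hashlib, itertools, os

# Each wire value (0/1) gets a random secret label. The garbled table encrypts
# each output label under the hash of the matching pair of input labels, so an
# evaluator holding one label per input wire learns exactly one output label
# and nothing about the underlying bits.

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    labels = {w: {0: os.urandom(32), 1: os.urandom(32)} for w in "abc"}
    table = [xor(H(labels["a"][va], labels["b"][vb]), labels["c"][va & vb])
             for va, vb in itertools.product((0, 1), repeat=2)]
    return labels, table

def evaluate(table, la, lb):
    # without point-and-permute bits the evaluator just decrypts every row;
    # only the row matching its input labels yields the valid output label
    return [xor(row, H(la, lb)) for row in table]

labels, table = garble_and()
candidates = evaluate(table, labels["a"][1], labels["b"][1])
print(labels["c"][1] in candidates)  # True: AND(1, 1) decrypts to the 1-label
```

The generator's side of this loop, one hash and one XOR per table row across millions of gates, is exactly the embarrassingly parallel work that maps well onto FPGA pipelines.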
Information Security
Research on Consensus Protocols in Asynchronous Network Model
WANG Di, LEI Hang, CAO Guangping
Computer Science. 2025, 52 (4): 310-326.  doi:10.11896/jsjkx.240600132
Abstract PDF(3432KB) ( 75 )   
References | Related Articles | Metrics
With the development of distributed systems, the consensus problem has garnered widespread attention in computer science. The asynchrony inevitable in distributed systems greatly impacts the design of consensus protocols, and the FLP impossibility result shows that in asynchronous systems no deterministic consensus protocol can guarantee that a distributed system reaches agreement. Researchers have therefore studied extensively how to circumvent the FLP result in asynchronous consensus. This paper first analyzes the relevant definitions and theories of the distributed consensus problem and summarizes a definition of asynchronous consensus protocols. It then elaborates on the development trajectory, implementation methods, and performance metrics of asynchronous consensus protocols from the perspective of different strategies for circumventing the FLP result, analyzing the advantages and disadvantages of randomization, partially synchronous models, failure detectors, conditional restrictions, and hybrid consensus approaches. It points out that the design of asynchronous consensus protocols mostly remains theoretical and is difficult to put into practical use. Finally, it briefly discusses the equivalence of consensus problems, hoping to expand the implementation pathways of consensus protocols and promote the innovation and development of asynchronous consensus protocols.
Multi-view and Multi-scale Fusion Attention Network for Document Image Forgery Localization
MENG Sijiang, WANG Hongxia, ZENG Qiang, ZHOU Yang
Computer Science. 2025, 52 (4): 327-335.  doi:10.11896/jsjkx.240100142
Abstract PDF(3079KB) ( 93 )   
References | Related Articles | Metrics
With the improvement and application of various digital platforms, document images have spread widely on the Internet. Meanwhile, advances in image processing technology have increased the risk of document image tampering, making it crucial to ensure the integrity and authenticity of document images. This paper proposes the multi-view and multi-scale fusion attention network (MM-Net), aiming to improve the accuracy of document image forgery localization in real-world settings. A multi-view encoder combines RGB information, noise information, and character information to fully extract tampering features, and a multi-scale fusion attention module is designed to let multi-scale features interact, enhancing the important content information in document images. Extensive experiments on the large-scale DocTamper dataset demonstrate that MM-Net localizes tampered regions in document images more precisely, with F-scores of 0.809, 0.807, and 0.774 on the test set and the cross-domain datasets FCD and SCD, respectively. Moreover, MM-Net exhibits good generalizability and robustness.
Self-supervised Backdoor Attack Defence Method Based on Poisoned Classifier
WANG Yifei, ZHANG Shengjie, XUE Dizhan, QIAN Shengsheng
Computer Science. 2025, 52 (4): 336-342.  doi:10.11896/jsjkx.240100005
Abstract PDF(2048KB) ( 81 )   
References | Related Articles | Metrics
In recent years, the rapid rise of self-supervised learning (SSL) networks has become a pivotal force propelling advances in deep learning. This prominence is particularly evident in pre-trained image models and large language models (LLMs), which have attracted worldwide attention. However, recent investigations have revealed the susceptibility of self-supervised learning networks to backdoor attacks, posing a significant challenge to their robustness. The vulnerability arises because a pre-trained model's performance on downstream tasks can be manipulated by injecting a small number of training samples carrying malicious backdoors into the training dataset. Recognizing the critical need to defend against such SSL backdoor attacks, this paper proposes a novel defense mechanism, defending by poisoned classifier (DPC), which leverages the capabilities of a deliberately poisoned classifier. DPC trains a threat model on a dataset intentionally contaminated with adversarial samples; this enables the method to accurately identify and detect toxic samples, establishing a strong defense against threats embedded in the training data. The experimental outcomes are compelling: under the assumption that blocking the backdoor trigger effectively changes the activation state of downstream clustering models, the DPC defense achieves a 91.5% recall rate and a 27.4% precision rate for backdoor trigger detection in our experiments, outperforming the previous SOTA method. These results underscore that the proposed method not only fortifies self-supervised learning networks against potential threats but also elevates their overall security posture. By providing a robust defense mechanism, DPC contributes significantly to ensuring the integrity and reliability of self-supervised learning models in the face of evolving challenges in the dynamic landscape of deep learning.
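The recall and precision figures reported above are standard detection metrics. As a minimal sketch of how they are computed for a poisoned-sample detector (the sample flags below are invented for illustration, not taken from the paper's experiments):

```python
def detection_metrics(truth, flagged):
    """Recall and precision of a poisoned-sample detector.

    truth   -- list of bools, True where a sample is actually poisoned
    flagged -- list of bools, True where the detector flagged the sample
    """
    tp = sum(t and f for t, f in zip(truth, flagged))
    fp = sum((not t) and f for t, f in zip(truth, flagged))
    fn = sum(t and (not f) for t, f in zip(truth, flagged))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

# Toy run: 4 poisoned samples; the detector catches 3 and also flags 2 clean ones.
truth   = [True, True, True, True, False, False, False, False]
flagged = [True, True, True, False, True, True, False, False]
recall, precision = detection_metrics(truth, flagged)
print(recall, precision)  # 0.75 0.6
```

A high recall with modest precision, as DPC reports, means almost all poisoned samples are caught at the cost of some clean samples being discarded, which is usually an acceptable trade for training-set sanitization.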
Persistent Backdoor Attack for Federated Learning Based on Trigger Differential Optimization
JIANG Yufei, TIAN Yulong, ZHAO Yanchao
Computer Science. 2025, 52 (4): 343-351.  doi:10.11896/jsjkx.240800043
Abstract PDF(2924KB) ( 165 )   
References | Related Articles | Metrics
The distributed nature of federated learning allows each client to train the model while keeping its data local, but it also allows attackers to control or mimic some clients and launch backdoor attacks, implanting carefully designed fixed triggers to manipulate the model output. Effectiveness and persistence are the two key criteria for measuring an attack: effectiveness refers to the attack success rate, while persistence is the ability to sustain a high success rate even after the attack ceases. Research on effectiveness is already relatively mature, but maintaining trigger persistence remains a challenging problem. This paper proposes a backdoor attack method based on dynamically optimized triggers to extend trigger persistence. Firstly, during the dynamic updates of federated learning, the trigger is optimized synchronously to minimize the difference between the latent representations of trigger features during and after the attack, training the global model to memorize the trigger features. Secondly, redundant neurons are used as an indicator of successful backdoor implantation, and noise is added adaptively to enhance attack effectiveness. Extensive experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets show that the proposed scheme effectively extends trigger persistence in federated learning environments. Under five kinds of representative defense mechanisms, the attack success rate exceeds 98%; in particular, on CIFAR-10 the success rate still exceeds 90% even after more than 600 rounds.
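The core of the trigger differential optimization step is minimizing the gap between the latent representations a triggered input receives during the attack and after benign rounds have diluted the backdoor. The sketch below illustrates the idea with two scalar stand-in "feature extractors" and a finite-difference gradient; the toy functions `f_attack`, `f_after` and all constants are assumptions for illustration, not the paper's networks:

```python
# Hypothetical latent maps standing in for the global model's representation
# during the attack (f_attack) and after post-attack benign updates (f_after).
def f_attack(x): return 1.7 * x + 0.3
def f_after(x):  return 1.1 * x - 0.2

def latent_gap(trigger, xs):
    """Mean squared gap between the two representations on triggered inputs."""
    return sum((f_attack(x + trigger) - f_after(x + trigger)) ** 2 for x in xs) / len(xs)

def optimize_trigger(trigger, xs, lr=0.05, steps=200, eps=1e-4):
    """Gradient descent on the trigger via a central finite-difference gradient."""
    for _ in range(steps):
        grad = (latent_gap(trigger + eps, xs) - latent_gap(trigger - eps, xs)) / (2 * eps)
        trigger -= lr * grad
    return trigger

xs = [0.2, 0.5, 0.9]           # toy "inputs" the trigger is added to
t = optimize_trigger(0.0, xs)
print(t, latent_gap(t, xs))    # the gap shrinks well below its initial value
```

In the real attack the same descent runs on a neural network's representations, so that the features the trigger activates stay close to what the drifting global model will still recognize after the attack stops.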
Blockchain-based Highly Trusted Query Verification Scheme for Streaming Data
YANG Fan, SUN Yi, LIN Wei, GAO Qi
Computer Science. 2025, 52 (4): 352-361.  doi:10.11896/jsjkx.240100184
Abstract PDF(2431KB) ( 86 )   
References | Related Articles | Metrics
With the spread of intelligent IoT applications, IoT devices continuously collect large amounts of streaming data for real-time processing. Because such devices are resource-constrained, most of this stream data must be outsourced to server-side storage. Ensuring the integrity of stream data that is highly real-time and grows without bound is a complex and challenging problem. Although schemes for streaming data integrity verification have been proposed, in untrustworthy outsourced storage environments the correctness and completeness of query results returned by a malicious server still cannot be guaranteed. Recently, blockchain technology, built on distributed consensus, has brought new ideas and methods to the data integrity verification problem. This paper therefore proposes a highly trustworthy streaming data query verification scheme based on the immutability of the blockchain and designs a low-maintenance on-chain data structure, CS-DCAT, which stores only the root-node hash of the authentication tree on the blockchain. The scheme is suitable for streaming data of unpredictable volume and supports range query verification. Security analysis proves the correctness and security of the scheme, and performance evaluation shows that it achieves low gas overhead on the blockchain; the computational complexity of range query and verification depends only on the current data volume and introduces little extra computation and communication overhead.
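The idea of anchoring only the authentication tree's root hash on-chain can be illustrated with a plain Merkle tree. This is a simplification: the paper's CS-DCAT structure is more elaborate and supports efficient appends and range queries, but the verification principle, recomputing the root from a leaf plus sibling hashes and comparing with the immutable on-chain value, is the same:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a binary hash tree; only this value would live on-chain."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root for leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Client-side check of a server's answer against the on-chain root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

stream_chunks = [b"r1", b"r2", b"r3", b"r4", b"r5"]
root = merkle_root(stream_chunks)                  # published on-chain
assert verify(b"r3", merkle_proof(stream_chunks, 2), root)
assert not verify(b"r3-tampered", merkle_proof(stream_chunks, 2), root)
```

Because the server must supply a proof that hashes up to the on-chain root, a malicious server cannot forge or omit results without detection.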
Study on Smart Contract Vulnerability Repair Based on T5 Model
JIAO Jian, CHEN Ruixiang, HE Qiang, QU Kaiyang, ZHANG Ziyi
Computer Science. 2025, 52 (4): 362-368.  doi:10.11896/jsjkx.240800039
Abstract PDF(2863KB) ( 111 )   
References | Related Articles | Metrics
Current research on repairing vulnerabilities in Ethereum smart contracts primarily relies on manually defined templates. This approach requires developers to have extensive expertise, and it performs poorly on complex vulnerabilities. This paper explores vulnerability repair for smart contracts at the Solidity source-code level. By introducing a machine learning approach to vulnerability repair, we design and implement a T5-based smart contract vulnerability repair system to remove the dependence on manual intervention. Using data crawling and data augmentation techniques, we compile a training dataset for the T5 model and train it to repair smart contract vulnerabilities. A test dataset constructed through web crawling is used to evaluate the system's performance from multiple perspectives. The system's repair accuracy, gas consumption, and volume of introduced code are compared with other contract vulnerability repair tools such as TIPS, SGUARD, and Elysium. Experimental results show that the system achieves good repair outcomes and overall performance superior to the other tools.
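One of the comparison metrics above, introduced code volume, can be sketched as a line-level diff count between the buggy and repaired sources. The Solidity snippets below are invented examples of a reentrancy-style reordering, not taken from the paper's dataset:

```python
import difflib

def introduced_loc(original: str, repaired: str) -> int:
    """Number of lines a repair adds (one way to measure introduced code volume)."""
    diff = difflib.unified_diff(original.splitlines(), repaired.splitlines(), lineterm="")
    return sum(1 for line in diff if line.startswith("+") and not line.startswith("+++"))

# Invented buggy contract: state update after the external call (reentrancy-prone).
buggy = "function withdraw() {\n  msg.sender.call.value(bal)();\n  bal = 0;\n}"
# Invented repaired contract: checks-effects-interactions ordering.
fixed = "function withdraw() {\n  uint b = bal;\n  bal = 0;\n  msg.sender.call.value(b)();\n}"
print(introduced_loc(buggy, fixed))  # 2
```

A repair tool that fixes the vulnerability while adding fewer lines scores better on this metric, since smaller patches are easier to audit and tend to cost less gas.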
Defense Architecture for Adversarial Examples of Ensemble Model Traffic Based on Feature Difference Selection
HE Yuankang, MA Hailong, HU Tao, JIANG Yiming
Computer Science. 2025, 52 (4): 369-380.  doi:10.11896/jsjkx.240200092
Abstract PDF(5865KB) ( 159 )   
References | Related Articles | Metrics
Currently, anomaly traffic detection models built on deep learning are increasingly vulnerable to adversarial example attacks. Adversarial training has emerged as a potent defense: by incorporating adversarial examples into the training process, it enhances the model's robustness against similar attacks. However, this approach has a drawback: the gain in robustness comes at the cost of reduced detection accuracy. This trade-off between robustness and accuracy has become a pivotal concern in deep-learning-based anomaly detection and the subject of intense debate and research. To address it, this paper proposes a framework that balances detection performance with robustness against adversarial attacks. Drawing on ensemble learning, we construct a multi-model adversarial defense framework that improves both adversarial robustness and detection performance. By integrating proactive feature difference selection with passive adversarial training, the framework fortifies the model against adversarial threats while maintaining high detection accuracy. It consists of a feature difference selection module, a detector ensemble module, and a voting decision module, addressing both the inability of a single detection model to balance detection performance and robustness and the problem of lagging defense. For model training, we introduce a method for constructing training data based on feature difference selection: traffic features with significant differences are selectively combined into differentiated traffic example sets, which are then used to train multiple heterogeneous detection models. This raises the models' resistance to adversarial attacks crafted against any single model and presents a harder target for attackers. The framework further includes an adjudication mechanism for the results produced by the multiple models: an improved heuristic population algorithm optimizes the ensemble's adjudication strategy, which both improves detection accuracy and significantly increases the complexity and difficulty of generating effective adversarial examples, providing an additional layer of defense. Experimental results underscore the efficacy of the proposed method: compared with traditional single-model adversarial training, the multi-model framework achieves nearly a 10% improvement in both accuracy and robustness.
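The voting decision over heterogeneous detectors can be sketched as a weighted majority vote. The weights here are fixed placeholders; in the paper they would be tuned by the improved heuristic population algorithm, which is omitted from this sketch:

```python
from collections import Counter

def ensemble_decide(predictions, weights=None):
    """Weighted majority vote over per-detector labels ('benign'/'attack').

    predictions -- one label per heterogeneous detector
    weights     -- optional per-detector weights (uniform when omitted)
    """
    weights = weights or [1.0] * len(predictions)
    tally = Counter()
    for label, w in zip(predictions, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

# An adversarial example that fools one of three detectors is still outvoted.
print(ensemble_decide(["attack", "benign", "attack"]))                    # attack
print(ensemble_decide(["benign", "attack", "attack"], [2.0, 0.5, 0.5]))   # benign
```

Because the detectors are trained on differentiated feature subsets, an adversarial perturbation crafted against one of them rarely transfers to all, which is what makes the vote an effective final layer.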
Two-dimensional Dynamic Coupled Mapping Lattice System Based on m-sequence and Spherical Cavity and Its Characteristics
MA Yingjie, TIAN Yan, ZHAO Geng, YANG Yatao, QIN Jingying, HONG Hui
Computer Science. 2025, 52 (4): 381-391.  doi:10.11896/jsjkx.240100008
Abstract PDF(5073KB) ( 86 )   
References | Related Articles | Metrics
To address the low dynamical complexity of existing spatiotemporal chaotic systems and the high correlation between the sequences they generate, a two-dimensional dynamic coupled map lattice system based on m-sequences and a spherical cavity is proposed. An improved m-sequence is used to construct an iteration matrix that determines, at each iteration step, the coupling index and perturbation sign for every lattice site. A piecewise chaotic map constructed from a spherical cavity introduces controllable perturbations into the chaotic sequence. The dynamical characteristics of the system are evaluated via bifurcation diagrams, return maps, Lyapunov exponents, and K-entropy, while the correlation, uniformity, and randomness of the generated sequences and the computational complexity of the system are tested and verified. Simulation experiments and comparative analysis show that the system exhibits rich nonlinear dynamical behavior, including period-doubling and bifurcation phenomena, and superior performance metrics, with potential applications in data encryption, image encryption, and S-box generation.
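For readers unfamiliar with coupled map lattices, the sketch below shows a minimal standard 2D CML with logistic local dynamics and fixed nearest-neighbour coupling. It omits the paper's contributions (m-sequence-driven dynamic coupling indices, perturbation signs, and spherical-cavity perturbations) and only demonstrates the baseline update rule and its sensitive dependence on initial conditions; all constants are illustrative:

```python
def logistic(x, mu=3.99):
    """Local chaotic map f(x) = mu * x * (1 - x)."""
    return mu * x * (1.0 - x)

def cml_step(grid, eps=0.1, mu=3.99):
    """One update of a 2D coupled map lattice with periodic boundaries:
    x'(i,j) = (1-eps)*f(x(i,j)) + (eps/4) * sum of f over the 4 neighbours."""
    n = len(grid)
    f = [[logistic(v, mu) for v in row] for row in grid]
    return [
        [(1 - eps) * f[i][j]
         + eps / 4 * (f[(i - 1) % n][j] + f[(i + 1) % n][j]
                      + f[i][(j - 1) % n] + f[i][(j + 1) % n])
         for j in range(n)]
        for i in range(n)
    ]

# Sensitivity check: perturb one lattice site by 1e-9 and iterate both grids.
a = [[0.1 + 0.01 * (i * 4 + j) for j in range(4)] for i in range(4)]
b = [[v + (1e-9 if (i, j) == (0, 0) else 0.0)
      for j, v in enumerate(row)] for i, row in enumerate(a)]
for _ in range(60):
    a, b = cml_step(a), cml_step(b)
diff = max(abs(a[i][j] - b[i][j]) for i in range(4) for j in range(4))
print(diff)  # grows far above the 1e-9 initial perturbation
```

The proposed system replaces this fixed neighbour coupling with m-sequence-selected coupling partners and adds cavity-based perturbations, which is what raises the dynamical complexity and lowers inter-sequence correlation relative to this baseline.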