Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK

Featured Articles

  • Volume 53 Issue 2, 15 February 2026
      
      Educational Data Mining Based on Graph Machine Learning
      Review of Personalized Educational Resource Recommendations
      XI Penghui, WU Xiazhen, JIANG Wencong, FANG Liangda, HE Chaobo, GUAN Quanlong
      Computer Science. 2026, 53 (2): 1-15.  doi:10.11896/jsjkx.250700184
      Under the background of the “Double Reduction” policy and the ongoing digital transformation of education, personalized educational recommender systems (ERSs) have become a key enabler of smart education. By modelling learners’ knowledge mastery, interests, and behavioural patterns, ERSs support personalised instruction and improve learning efficiency. This paper provides a systematic review of research progress in three core tasks: course recommendation, exercise recommendation, and learning path recommendation. Course recommendation has evolved from traditional collaborative filtering and matrix factorisation to graph neural networks and reinforcement learning, enhancing accuracy and adaptability. Exercise recommendation has shifted from static tag matching to dynamic knowledge tracing and deep learning models, capturing complex learner-item interactions. Learning path recommendation must balance knowledge dependency, learner ability evolution, and multi-objective constraints. Recent approaches integrate graph-based modelling, reinforcement learning, and evolutionary algorithms to optimise personalised paths. The paper also reviews mainstream datasets and performance comparisons, summarising the strengths and limitations of different methods. Finally, it highlights future directions: dynamic knowledge evolution modelling, cross-scenario generalisation, adaptive strategy design, and enhanced interpretability and usability, aiming to transform ERSs from static and opaque “black-box” models into dynamic, transparent, and human-centred educational ecosystems.
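The evolution from matrix factorisation to newer course-recommendation models can be made concrete with a toy example. The sketch below is a minimal matrix-factorisation recommender trained by stochastic gradient descent; the learner-course ratings, hyperparameters, and names are all invented for illustration, not taken from any surveyed system:

```python
import random

def matrix_factorization(ratings, k=2, steps=2000, lr=0.01, reg=0.02):
    """Factorize sparse (learner, course) -> rating data into latent factors P and Q via SGD."""
    random.seed(0)
    learners = {u for u, _ in ratings}
    courses = {c for _, c in ratings}
    P = {u: [random.random() for _ in range(k)] for u in learners}
    Q = {c: [random.random() for _ in range(k)] for c in courses}
    for _ in range(steps):
        for (u, c), r in ratings.items():
            err = r - sum(P[u][f] * Q[c][f] for f in range(k))
            for f in range(k):
                pu, qc = P[u][f], Q[c][f]
                P[u][f] += lr * (err * qc - reg * pu)  # gradient step with L2 regularization
                Q[c][f] += lr * (err * pu - reg * qc)
    return P, Q

# Hypothetical learner-course ratings on a 1-5 scale
ratings = {("u1", "c1"): 5, ("u1", "c2"): 3, ("u2", "c1"): 4, ("u2", "c3"): 2}
P, Q = matrix_factorization(ratings)
predict = lambda u, c: sum(P[u][f] * Q[c][f] for f in range(2))
```

Unobserved pairs such as ("u2", "c2") can then be scored with the same dot product, which is what makes the factorisation usable as a recommender.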
      Survey on Graph Neural Network-based Methods for Academic Performance Prediction
      ZHAI Jie, CHEN Lexuan, PANG Zhiyu
      Computer Science. 2026, 53 (2): 16-30.  doi:10.11896/jsjkx.250800001
      Currently, academic performance prediction, as a core component of personalized educational support systems, has become a focal point of research in the field of educational data mining, playing a significant role in optimizing teaching decisions and guiding student development. However, traditional prediction methods struggle to effectively address the challenges posed by the complex correlations, temporal evolution, and group dependencies inherent in multi-source heterogeneous data within educational contexts, resulting in limitations in prediction accuracy and generalization capabilities. Graph Neural Networks (GNNs), leveraging their powerful relational modeling and representation learning abilities, provide a novel paradigm for addressing these challenges. Consequently, numerous researchers are dedicated to applying GNNs to academic performance prediction research. This paper presents a systematic review of current research efforts on GNN-based academic performance prediction tasks. Starting from the problem definition, it analyzes the core challenges of academic performance prediction. It then outlines the foundational knowledge and common models of GNNs. Subsequently, it categorizes and reviews the representative models and their application scenarios for academic performance prediction, including static feature modeling, combined static and dynamic feature modeling, and techniques empowered by emerging large model technologies. Building on this, the paper systematically summarizes and analyzes the evaluation-related datasets and metrics used for GNN-based academic performance prediction methods. Finally, it prospects future research directions from perspectives such as model scalability, interpretability, multimodal semantic information fusion, and dynamic graph pre-training.
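For readers unfamiliar with the GNN building block such surveys assume, one graph-convolution propagation step (symmetric normalisation with self-loops; learned weights and the nonlinearity omitted) can be sketched in plain Python. The 3-student graph below is a made-up example:

```python
def gcn_layer(adj, feats):
    """One GCN propagation step, H' = D^{-1/2}(A + I)D^{-1/2} H, weights/activation omitted."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]  # self-loops
    deg = [sum(row) for row in a]
    return [[sum(a[i][j] * feats[j][f] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n))
             for f in range(len(feats[0]))]
            for i in range(n)]

# Made-up 3-student graph: students 0 and 1 interact; student 2 is isolated
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
feats = [[1.0], [3.0], [5.0]]  # one scalar feature per student (e.g. an activity score)
h = gcn_layer(adj, feats)      # connected students are smoothed; the isolated one is unchanged
```

The smoothing effect, connected nodes pulling each other's representations together, is exactly why group dependencies among students become learnable for performance prediction.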
      Robust Knowledge Tracing Model Based on Two-level Contrastive Learning
      CHEN Xiaolan, MAO Shun, LI Weisheng, LIN Ronghua, TANG Yong
      Computer Science. 2026, 53 (2): 31-38.  doi:10.11896/jsjkx.250700196
      Knowledge tracing is key to adaptive learning, aiming to assess students’ knowledge states and predict their future performance. Currently, data sparsity limits existing knowledge tracing models in both question embedding learning and student knowledge state modeling. To address this, some studies have introduced contrastive learning. However, existing contrastive learning methods rely on random perturbations of graph structures (for question embedding) or modifications of learning interaction sequences (for knowledge state modeling) to generate contrastive views, introducing noise and erroneous self-supervised signals. This results in question embeddings that are poorly suited for downstream tasks in learning systems. To overcome these limitations, this study proposes an innovative Dual-level Contrastive Learning Framework (DCLF) to simultaneously enhance question embedding learning and student knowledge state modeling in knowledge tracing. DCLF adopts a more effective contrastive paradigm that avoids altering the original data information. Instead, it generates contrastive views through relational transformations of the original data or by leveraging outputs from different neural networks on the same data. Specifically, for embedding learning, the proposed method obtains contrastive views through relational transformations of the data. For student knowledge state modeling, it encodes learning interactions using different neural networks to obtain knowledge states under various encoders. This method extracts rich self-supervised signals from multiple contrastive views, preserving the intrinsic semantic information of the data and effectively avoiding noise introduction. Experiments conducted on three commonly used datasets demonstrate that DCLF outperforms the selected existing knowledge tracing models in terms of performance.
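DCLF's exact losses are not given in the abstract, but the generic contrastive objective such frameworks build on (InfoNCE: each anchor should match its counterpart view against all others as negatives) can be sketched on toy embeddings. The vectors and temperature below are illustrative only:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def info_nce(view1, view2, tau=0.5):
    """InfoNCE: each view1[i] should match view2[i] against every other view2[j] (negatives)."""
    loss = 0.0
    for i, anchor in enumerate(view1):
        sims = [math.exp(cosine(anchor, v) / tau) for v in view2]
        loss += -math.log(sims[i] / sum(sims))
    return loss / len(view1)

# Two contrastive views of the same two question embeddings (toy values)
v1 = [[1.0, 0.0], [0.0, 1.0]]
v2 = [[0.9, 0.1], [0.1, 0.9]]
loss = info_nce(v1, v2)  # small, since matching pairs are far more similar than negatives
```

The loss shrinks as matched views agree and mismatched views separate, which is the self-supervised signal contrastive knowledge-tracing methods exploit.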
      Direction-aware Siamese Network for Knowledge Concept Prerequisite Relation Prediction
      YANG Ming, HE Chaobo, YANG Jiaqi
      Computer Science. 2026, 53 (2): 39-47.  doi:10.11896/jsjkx.250600005
      Prerequisite relation prediction for knowledge concepts seeks to enhance the curriculum knowledge graph by exploring semantic and topological dependencies among concepts, thereby improving downstream applications such as large-scale resource organization and personalized learning path planning. Existing methods, which mainly rely on feature engineering and deep learning, still struggle to effectively model entity-level semantics and the directional nature of prerequisite relations, leaving room for further improvement. To address this problem, this paper proposes a direction-aware siamese network for knowledge concept prerequisite relation learning (DSN-PRL). Firstly, DSN-PRL employs a contrastive learning-based pre-trained language model, BERT, to capture fine-grained semantic representations of knowledge concepts. It then applies a graph neural network to incorporate multi-hop topological features and enhance hierarchical structure modeling. Finally, a direction-aware siamese network is designed to learn the directional distinctions between concept pairs for accurate prerequisite relation prediction. Experiments conducted on three benchmark datasets demonstrate that DSN-PRL outperforms existing baseline methods across multiple key evaluation metrics. In particular, compared with the best baseline model DGPL, DSN-PRL improves precision by 7.3, 2.7, and 11.4 percentage points, and F1 by 1.6, 1.3, and 4.3 percentage points, respectively.
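The directional asymmetry DSN-PRL targets can be illustrated with a generic asymmetric bilinear scorer, where a non-symmetric relation matrix makes score(a, b) differ from score(b, a). This is a hand-rolled sketch, not the authors' network; the concept embeddings and matrix are invented:

```python
import math

def directional_score(ha, hb, W):
    """Asymmetric bilinear score sigmoid(ha^T W hb); a non-symmetric W makes it direction-aware."""
    n = len(ha)
    s = sum(ha[i] * W[i][j] * hb[j] for i in range(n) for j in range(n))
    return 1.0 / (1.0 + math.exp(-s))

# Invented concept embeddings and a non-symmetric relation matrix
h_loops = [1.0, 0.0]
h_recursion = [0.0, 1.0]
W = [[0.0, 2.0], [-2.0, 0.0]]
fwd = directional_score(h_loops, h_recursion, W)  # "loops" prerequisite of "recursion": high
bwd = directional_score(h_recursion, h_loops, W)  # reversed direction: low
```

A symmetric W (or a plain cosine similarity) would give fwd == bwd, which is why prerequisite prediction needs an explicitly direction-aware component.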
      Dynamic Recommendation of Personalized Hands-on Learning Materials Based on Lightweight Educational LLMs
      ZHAI Jie, LI Yanhao, CHEN Lexuan, GUO Weibin
      Computer Science. 2026, 53 (2): 48-56.  doi:10.11896/jsjkx.250800002
      The deep integration of artificial intelligence (AI) technology in the education sector has become a core strategy for national educational digital transformation. Within the domain of computer practice teaching, the precise recommendation of practical learning resources serves as a vital pathway to enhance student learning efficacy and quality. Confronting the tension between the scale of higher education and the diversification of student needs, this study proposes a lightweight educational large model-based personalized practice learning resource recommendation framework, named LightPLRec (Lightweight Personalized Learning Recommender for Dynamic Practice Materials). The model is designed to intelligently recommend tailored practical learning materials in response to the dynamic changes in individual student characteristics. Leveraging a lightweight large model with low computational demands, it constructs the SPIR (Student Profile & Interest-based Recommender) educational large model for personalized practical learning resource recommendation through instruction fine-tuning and reinforcement learning methods. By integrating multi-source heterogeneous data and deeply incorporating the curriculum knowledge system, disciplinary frontiers, industrial development trends, and national strategic orientations, it establishes a cross-disciplinary, multimodal practical learning resource repository and designs the graph2topic method for converting knowledge graphs into thematic text. Empowered by the robust capabilities of the SPIR large model and supported by the multi-source resource repository, it proposes an intelligent workflow-based recommendation method. Specifically, it designs a thematic analysis method to extract student competency features from assessment results, applies the GCN (Graph Convolutional Network) algorithm to mine student interest features from learning behavior data, and creates dual intelligent agents: a “Competency-Recommender Agent” and an “Interest-Recommender Agent”. This constructs a dual-agent collaboratively driven intelligent workflow system, enabling a series of tasks from the intelligent generation of personalized student profiles to the dynamic recommendation of practical learning resources. Furthermore, a personalized resource recommendation dataset is constructed, on which the proposed model demonstrates significantly superior performance compared to baseline models. Specifically, the LightPLRec model trained on the Qwen2.5-3.0B base model demonstrates outstanding performance in both the capability recommendation and interest recommendation tasks, achieving accuracies of 0.947 and 0.939 respectively, surpassing the evaluation results of DeepSeek-V3 on the same dataset. This research provides a technical paradigm for the vertical application of educational large models in specific scenarios. Simultaneously, by creating a dynamic personalized practical learning resource recommendation model, it offers an innovative pathway to implement the principle of “teaching students according to their aptitude” and cultivate high-quality computer practice talents.
      CPViG-Net: Students’ Classroom Behavior Recognition Based on Cross-stage Visual Graph Convolution
      ZHANG Haopeng, SHI Zheng, LIU Feng, SONG Wanru
      Computer Science. 2026, 53 (2): 57-66.  doi:10.11896/jsjkx.250500100
      With the evolution of educational paradigms from “human-computer collaboration” to “human-intelligence collaborative co-education”, the intelligent evaluation of teaching is also facing new requirements and challenges. In recent years, the task that takes student behavior as its starting point has gained widespread attention. Aiming at the challenges of diverse student behaviors, heavy occlusions and severe background interference in real classroom environments, a cross-stage partial vision graph network (CPViG-Net) is proposed to enhance the accuracy of student behavior detection in complex classroom settings. Based on a classic object detection framework, the model integrates the dynamic feature modeling ability of the vision GNN and constructs the partial max-relative graph convolution (PMG) module and the cross-stage partial fusion (CPF) module. The PMG module captures the neighborhood information with the greatest feature differences between nodes by embedding maximum relative graph convolution, thereby specifically addressing the issue of information loss caused by local occlusions. It also incorporates depthwise separable convolution to reduce the computational cost of the graph convolution algorithm. The CPF module reconstructs the feature structure using fully connected layers and leverages the cross-stage connection mechanism of the C2f module to achieve multi-level feature fusion, thereby enhancing the ability of the model to recognize small-scale objects. In addition, the model proposes optimization strategies for different datasets through the optimization of nearest-neighbor K values. On the public dataset SCB03-S, the mAP@50 of CPViG-Net reaches 70.9%, an improvement of 2 percentage points over the baseline model. Experiments on multiple publicly available datasets demonstrate that the model exhibits good performance and high robustness in addressing the various challenges of student behavior recognition in real classroom scenarios.
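The saving from the depthwise separable convolution used in the PMG module is easy to quantify with the standard parameter-count formulas; the channel sizes below are arbitrary examples, not CPViG-Net's actual configuration:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, then a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3   # example channel sizes and kernel width
standard = conv_params(c_in, c_out, k)                  # 73728
separable = depthwise_separable_params(c_in, c_out, k)  # 8768
reduction = standard / separable                        # roughly 8.4x fewer parameters
```

The same ratio applies to multiply-accumulate operations per output position, which is why the substitution cuts the cost of graph-convolution feature transforms.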
      Research on Student Classroom Concentration Integrating Cross-modal Attention and Role Interaction
      ZHUO Tienong, YING Di, ZHAO Hui
      Computer Science. 2026, 53 (2): 67-77.  doi:10.11896/jsjkx.250300026
      With the continuous development of innovative education, schools can assess students’ learning and teachers’ teaching quality by detecting students’ concentration in the classroom to optimize the teaching system. Previous studies have focused on single-modality and single-role feature extraction. However, the teaching classroom is a complex scene with multiple modalities, multiple roles, and interactions between the roles, so it is of great significance to explore students’ classroom attentiveness from a multimodal, multi-role perspective. A significant challenge in judging students’ classroom concentration is how to effectively model the temporal relevance and semantic interaction between modalities, and how the multiple roles interact. To address the above problems, a student classroom concentration dataset containing teacher audio and student video is constructed, and a Long-Short Context Model (LSCM) for multimodal, multi-role assessment of students’ classroom concentration is proposed, in which the modalities are the students’ video and the teacher’s audio, and the roles are student-to-student and student-to-teacher. The model contains two main modules: the long-term context module and the short-term context module. Specifically, the long-term context module extracts the long-term behavioral characteristics of a single student through the audio self-attention mechanism and the visual self-attention mechanism, while the audio-visual cross-attention mechanism enhances the correlation between the audio and visual information. In contrast, the short-term context module focuses on localized time segments to portray the dynamic changes in the attentiveness of multiple students in the classroom environment. Finally, the model outputs the concentration category of each student in the video. Experiments show that this method significantly improves concentration detection accuracy compared with existing methods by effectively exploiting the complementary nature of multimodal data and the correlation between roles. It also verifies the effectiveness of multimodal fusion and role interaction modeling.
      GTKT: Knowledge Tracing Model Integrating Connectivism Learning and Multi-layer Temporal Graph Transformer
      LI Jiahao, JING Junchang, XU Qian, LIU Dong
      Computer Science. 2026, 53 (2): 78-88.  doi:10.11896/jsjkx.250700188
      Knowledge Tracing (KT) aims to model learners’ knowledge states based on their historical exercise records and predict their future performance. Traditional KT research primarily focuses on modeling learners’ behavioral sequences while overlooking the topological structure among knowledge concepts. Although recent methods using static knowledge graphs have shown progress, they fail to adequately capture the dynamic graph-structured relationships among learners, questions, and knowledge concepts, thereby ignoring potential correlations in the knowledge acquisition process and limiting model generalizability and interpretability. To address these limitations, this paper proposes a Graph Transformer Knowledge Tracing (GTKT) model that integrates connectivism learning theory with a multi-layer temporal graph Transformer. Firstly, guided by connectivism learning theory, it constructs temporal learner subgraphs to represent historical exercise sequences, proposing a time-aware hierarchical subgraph sampling strategy and a neighbor co-occurrence encoder to discover latent node relationships. Secondly, based on learning and forgetting theories, it designs a multi-band temporal encoder to capture temporal characteristics in learning behaviors and builds a multi-feature fusion module integrating learner-question-knowledge concept interactions. Thirdly, it develops a multi-layer temporal graph Transformer module for dynamic knowledge state modeling and prediction. Experimental results on six public datasets demonstrate that GTKT outperforms mainstream knowledge tracing models in predicting learner performance.
      Multimodal Physical Education Data Fusion via Graph Alignment for Action Recognition
      CHEN Haitao, LIANG Junwei, CHEN Chen, WANG Yufan, ZHOU Yu
      Computer Science. 2026, 53 (2): 89-98.  doi:10.11896/jsjkx.250800007
      In the context of intelligent sports and educational informatization, fine-grained human action recognition has become a key technology in physical education and training assessment. To address the limitations of traditional methods in utilizing multimodal information and capturing spatio-temporal structures in complex motion scenarios, this paper proposes a multimodal graph convolutional network model that fuses skeleton data and wearable sensor information. Firstly, it proposes a fusion method based on “virtual sensors”, which maps wearable sensor signals onto a spatio-temporal graph constructed from skeletal joints, enabling effective integration of multimodal information and enhancing fine-grained motion modeling and cross-modal semantic consistency. Secondly, it designs a multi-layer graph convolutional network tailored for complex sports movements, incorporating local body part segmentation to improve recognition performance in challenging scenarios. Thirdly, it constructs a high-quality multimodal dataset for fencing, covering various technical actions and skill levels, to support fine-grained action recognition and skill assessment. Experimental results on both this dataset and several public benchmarks demonstrate that the proposed method outperforms existing approaches in both action recognition accuracy and skill level classification. This work provides a novel modeling framework and technical support for intelligent recognition and evaluation in sports education.
      DCL-FKT: Personalized Knowledge Tracing via Dual Contrastive Learning and Forgetting Mechanism
      LI Chunying, TANG Zhikang, ZHUANG Zhiwei, LI Wenbo, GUO Yanxi, ZHANG Xiaowei
      Computer Science. 2026, 53 (2): 99-106.  doi:10.11896/jsjkx.250600002
      In the context of educational digitalization, accurately tracking students’ knowledge mastery has become one of the key approaches to improving teaching quality. Knowledge tracing seeks to analyze various types of student behavior data, such as responses to questions and online study duration, to evaluate their mastery of specific knowledge points. Although existing approaches have demonstrated good performance in predicting personalized learning behaviors, they still face two major challenges: 1) the widespread issue of data sparsity in educational settings; 2) the neglect of the complex and dynamic nature of students’ knowledge acquisition, resulting in an inability to effectively capture changes in knowledge and forgetting patterns. To address these challenges, this paper proposes a personalized knowledge tracing model, DCL-FKT, which integrates dual contrastive learning with a forgetting mechanism. The model alleviates data sparsity through question masking and substitution. Building upon the traditional contrastive learning framework, it introduces a feature-level contrastive learning module to eliminate redundant representations and enhance modeling efficiency. In addition, by incorporating a forgetting gate mechanism, the model dynamically simulates the human forgetting curve, allowing it to accurately capture the nonlinear decay of students’ knowledge over time and enabling dynamic modeling of the learning process. Experiments conducted on real-world datasets demonstrate that the proposed model achieves significant improvements in core metrics such as prediction accuracy. It provides a more accurate reflection of students’ actual knowledge levels and offers reliable support for personalized online learning.
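The forgetting-gate idea, exponential decay of mastery over the time gap between interactions followed by an update from the new response, can be sketched generically. The constants and function names below are illustrative, not DCL-FKT's actual parameterisation:

```python
import math

def forget_gate(knowledge, dt, strength=5.0):
    """Decay mastery toward 0 over the elapsed time dt; larger 'strength' means slower forgetting."""
    return knowledge * math.exp(-dt / strength)

def update(knowledge, dt, correct, gain=0.3):
    """Apply forgetting over the gap dt, then raise mastery if the new answer is correct."""
    k = forget_gate(knowledge, dt)
    if correct:
        k += gain * (1.0 - k)  # close a fraction of the remaining gap toward full mastery
    return k

k = update(0.8, dt=2.0, correct=True)  # mastery after a 2-unit gap and a correct answer
```

In a learned model the decay rate and gain are produced by the gate from the student's state rather than fixed, but the nonlinear decay-then-update dynamic is the same.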
      Computer Architecture
      Local Synchronous FMI Co-simulation Method Based on Thread Pool Task Scheduling
      XUE Zhaoyang, QIAN Xiaochao, LIU Fei
      Computer Science. 2026, 53 (2): 107-116.  doi:10.11896/jsjkx.250200061
      Parallel simulation is a key means to improve simulation performance. However, parallel co-simulation based on FMI faces many challenges, such as coupling between FMUs and threads, input/output synchronization, and mutual exclusion of the FMI API. In response to these problems, this paper proposes a local synchronous FMI co-simulation method based on thread pool task scheduling. Firstly, the framework of the method is presented, consisting of a simulation scheme, master algorithm, scheduler, and buffer to provide a modular representation of parallel simulation. Then, the master algorithm of FMI parallel co-simulation is described, which divides the simulation task of an FMU into two stages, simulation execution and task scheduling, with the scheduler dispatching tasks for execution. A customized read-write lock is designed to solve the synchronization problem during the simulation, and outputs are temporarily stored in a buffer to solve the problem of contention for FMI API access. The accuracy of the proposed method is verified through a room temperature difference model and a ship positioning model. Compared with non-iterative Jacobi-style FMU-parallel simulation, significant performance improvement is achieved.
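The general pattern, per-FMU step tasks scheduled on a thread pool with outputs staged in a lock-guarded buffer and a barrier between macro-steps, can be sketched with mock FMUs. This shows only the scheduling idea, not the paper's FMI master algorithm or its customized read-write lock:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

buffer = {}                      # staged step outputs, shared across workers
buffer_lock = threading.Lock()   # stands in for the paper's customized read-write lock

def fmu_step(name, t, state):
    """One macro-step of a mock FMU: advance its state, then publish the output under the lock."""
    new_state = state + 0.1 * (1.0 - state)  # toy first-order dynamics
    with buffer_lock:
        buffer[(name, t)] = new_state        # buffered outputs avoid contention on a shared API
    return new_state

states = {"room_a": 0.0, "room_b": 0.5}
with ThreadPoolExecutor(max_workers=2) as pool:
    for t in range(3):  # three synchronized macro-steps
        futures = {n: pool.submit(fmu_step, n, t, s) for n, s in states.items()}
        states = {n: fut.result() for n, fut in futures.items()}  # barrier before the next step
```

The per-step barrier makes this "local synchronous": FMUs run in parallel within a macro-step but exchange data only at step boundaries.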
      Loop Splitting Based on Conditional Statement Invariance Analysis
      HAN Lin, SHAO Jingjing, NIE Kai, LI Haoran, LIU Haohao, CHEN Mengyao
      Computer Science. 2026, 53 (2): 117-123.  doi:10.11896/jsjkx.250300153
      Loop splitting is a key compiler optimization technique that can reduce control overhead, improve instruction pipeline efficiency, and create opportunities for subsequent optimization. To address the limited applicability of the existing loop splitting strategy in the GCC compiler, an improved algorithm is proposed based on static single assignment form. It categorizes condition variables by their positions in PHI nodes into loop-header PHI node condition variables and merge PHI node condition variables. For these two types of condition variables, the algorithm performs semi-invariance analysis separately and selects partitioning points accordingly, thereby achieving more general loop splitting optimization. The algorithm is implemented in the Sunway GCC compiler. Experimental results on the new-generation Sunway processor platform show that, compared to the original algorithm, the proposed algorithm improves the performance of the 470.lbm benchmark in the SPEC CPU 2006 test suite by 8.8%, and the 620.omnetpp_s benchmark in the SPEC CPU 2017 test suite by 4.3%. This method expands the scope of optimizable loop structures, improves the optimization efficiency of the Sunway GCC compiler, and supports the basic software ecosystem construction of the domestic Sunway platform.
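The transformation itself can be illustrated outside the compiler: a semi-invariant condition (one that flips truth value once over the iteration space) lets the loop be split at the partition point into two branch-free loops. A toy Python rendering of the before/after semantics:

```python
def before(N, split_point):
    """Original loop: the condition i < split_point is semi-invariant (flips truth value once)."""
    out = []
    for i in range(N):
        if i < split_point:
            out.append(i * 2)   # "early" work
        else:
            out.append(i + 1)   # "late" work
    return out

def after(N, split_point):
    """Split at the partition point: two loops, each with the branch removed."""
    p = min(max(split_point, 0), N)
    return [i * 2 for i in range(p)] + [i + 1 for i in range(p, N)]
```

Both versions compute the same sequence, but the split form tests the condition zero times per iteration, which is what reduces control overhead and enables further per-loop optimization.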
      Parallel Detection Method of Maximum Floating-point Error Based on Gridding Particle Swarm Optimization Algorithm
      JI Liguang, ZHOU Bei, YANG Hongru, ZHOU Yuchang, CUI Mengqi, XU Jinchen
      Computer Science. 2026, 53 (2): 124-132.  doi:10.11896/jsjkx.250200014
      Floating-point computing programs are widely used in aerospace, artificial intelligence, national defense, financial settlement and other fields. The computing accuracy and performance of floating-point programs are directly related to the safety and effectiveness of related applications. The maximum floating-point error is the key indicator for measuring the accuracy of floating-point computing programs, and the cumulative effect of floating-point errors can lead to severe failures, so it is necessary to develop an accurate and efficient maximum floating-point error detection tool that supports researchers in taking timely optimization and intervention measures. The proposed algorithm transforms the problem of maximum error detection into the problem of searching for the maximum value of an objective function. It exploits the computing power of the master-slave two-level parallel computing mode of the domestic Sunway platform, deeply taps the performance and accuracy potential of the particle swarm heuristic search algorithm, and optimizes the particle swarm algorithm with the ideas of grid search, independent cultivation, hierarchical convergence and dynamic adaptation. The relevant search parameters are set according to the different stages of the search process, so that the improved algorithm achieves improvements in both search accuracy and search performance. This provides a new practical tool and a reference for accurately detecting the maximum floating-point error, and further enriches the tool library of the domestic Sunway platform.
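A heavily simplified, sequential rendering of the gridding idea, partition the search domain, run an independent swarm per cell ("independent cultivation"), and keep the best candidate, is sketched below. The objective is a smooth stand-in for a real floating-point error surface, and all constants are illustrative:

```python
import math
import random

def pso(f, lo, hi, n=10, iters=50):
    """Basic particle swarm search for the maximum of f on [lo, hi]."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = max(xs, key=f)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = 0.5 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # keep particles inside the cell
            if f(xs[i]) > f(pbest[i]):
                pbest[i] = xs[i]
        gbest = max(pbest, key=f)
    return gbest

def grid_pso(f, lo, hi, cells=4):
    """Gridding: split the domain, run one independent swarm per cell, keep the best candidate."""
    step = (hi - lo) / cells
    candidates = [pso(f, lo + c * step, lo + (c + 1) * step) for c in range(cells)]
    return max(candidates, key=f)

random.seed(0)
f = lambda x: math.exp(-(x - 2.0) ** 2)  # toy error surface with its maximum at x = 2
best = grid_pso(f, -5.0, 5.0)
```

In the real tool each cell's swarm would run on a separate slave core and f would evaluate the error between a floating-point program and a high-precision oracle; gridding keeps swarms from all collapsing into one basin.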
      Research on Fuzz Testing Techniques for Closed-source DBMSs Based on Black-box Instrumentation
      LI Zhongjie, LIANG Haotian, JIA Haoyang, WANG Qingxian, CAO Yan
      Computer Science. 2026, 53 (2): 133-144.  doi:10.11896/jsjkx.241200060
      DBMSs are widely used application software for managing business data, and their security is critical. Any form of data leakage or corruption could lead to significant security issues. Currently, there are relatively few public research findings on vulnerability detection for closed-source DBMSs. To enable effective testing of closed-source DBMSs, a novel approach has been developed. It proposes methods based on grammar structure mutation and semantic rule-based variable filling to generate test datasets in batches, creating syntactically and semantically correct complex SQL queries from provided raw corpora. These inputs allow for in-depth exploration of the deep logic of DBMSs. Additionally, a dynamic coverage analysis method based on Pin is introduced to collect real-time coverage data for closed-source DBMSs, using feedback from the coverage to guide seed scheduling in fuzz testing. Based on these methods, an automated testing prototype tool for closed-source DBMSs, named OFuz, has been developed. Experiments conducted on Oracle and SQL Server validate the effectiveness of OFuz, demonstrating superior performance in test dataset generation and coverage analysis compared to other tools.
      Database & Big Data & Data Science
      Contrastive Learning-based Masked Graph Autoencoder
      WANG Xinyu, SONG Xiaomin, ZHENG Huiming, PENG Dezhong, CHEN Jie
      Computer Science. 2026, 53 (2): 145-151.  doi:10.11896/jsjkx.250100155
      Masked graph autoencoders (MGAEs) have gained significant attention due to their effectiveness in handling node classification tasks on graph-structured data. However, existing MGAE models face two main limitations during the pretraining of the encoder: semantic information loss, and similarity of embeddings for masked nodes. To mitigate these issues, this paper proposes a Contrastive Masked Graph Autoencoder model (CMGAE). Firstly, the masked graph and the original graph are separately fed into the online encoder and the target encoder to generate online embeddings and target embeddings, respectively. Then, an information supplementation module is employed to compare the similarity between the online embeddings and target embeddings, thereby recovering the lost semantic information. Simultaneously, the online embeddings are passed through a discriminator function and decoder. The discriminator function helps increase the variance of the embeddings for masked nodes, mitigating the issue of similar embeddings for masked nodes. The decoder reconstructs node features that are used to train the online encoder. Finally, the pretrained online encoder is utilized for node classification tasks. Node classification experiments are conducted on five transductive benchmark datasets and one inductive dataset. The results show that CMGAE achieves transductive accuracies of 85.0%, 73.6%, 60.0%, 50.5%, and 71.8% on the respective datasets, while the Micro-F1 score on the inductive dataset reaches 74.8%. These results demonstrate that CMGAE outperforms existing models.
      News Recommendation Algorithm Based on User Static and Dynamic Interests and Denoised Implicit Negative Feedback
      WEI Jinsheng, ZHOU Su, LU Guanming, DING Jiawei
      Computer Science. 2026, 53 (2): 152-160.  doi:10.11896/jsjkx.241200177
      In existing news recommendation systems, news that users have not clicked on in the past is usually regarded as implicit negative feedback, and modelling this implicit negative feedback can guide the recommendation model to filter out news that users are not interested in. However, non-clicked news may still contain content of interest to users, that is, preference noise, which interferes with modelling implicit negative feedback. In addition, due to the diversification and variability of user interests, existing news recommendation systems often suffer from the “information cocoon” problem. To solve the above problems, this paper proposes a news recommendation algorithm based on users’ static and dynamic interests and denoised implicit negative feedback. By jointly modeling the static and dynamic interests of users and the denoised implicit negative feedback of static and dynamic interests, a dynamically updated user preference model is constructed. Firstly, a static interest denoising module based on orthogonal mapping is constructed to denoise the implicit negative feedback in static interests. Then, a GRU and orthogonal mapping are fused to construct a dynamic interest denoising module based on an improved GRU, which fully models the user’s interest changes and realizes the denoising of implicit negative feedback in dynamic interests. Finally, by introducing contrastive learning, the model’s ability to distinguish between implicit positive and negative feedback is enhanced to improve the performance of personalized news recommendations. Experiments on the MIND dataset show that, compared with the baseline method, the model improves by 1.18%, 1.84%, 2.75% and 1.67% on the four evaluation metrics AUC, MRR, NDCG@5 and NDCG@10, respectively, verifying the effectiveness of the proposed model.
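The orthogonal-mapping intuition, removing from a non-clicked item's embedding the component aligned with the user's interest so that only genuinely negative signal remains, can be sketched with toy 2-D vectors (this is the general projection idea, not the paper's actual module):

```python
def project_out(v, u):
    """Orthogonal mapping: remove from v its component along u, i.e. v - ((v.u)/(u.u)) u."""
    scale = sum(a * b for a, b in zip(v, u)) / sum(a * a for a in u)
    return [a - scale * b for a, b in zip(v, u)]

interest = [1.0, 0.0]                        # toy user-interest direction
unclicked = [0.6, 0.8]                       # embedding of a non-clicked news item
negative = project_out(unclicked, interest)  # interest-aligned part (the preference noise) removed
```

The result is orthogonal to the interest direction, so the part of the non-clicked item the user might actually have liked no longer pollutes the negative-feedback signal.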
      Time-Frequency Attention Based Model for Time Series Anomaly Detection
      XU Jingtao, YANG Yan, JIANG Yongquan
      Computer Science. 2026, 53 (2): 161-169.  doi:10.11896/jsjkx.241200106
      Abstract ( 41 )   PDF(2931KB) ( 87 )   
      References | Related Articles | Metrics
Time series anomaly detection is challenging due to complex temporal dependencies and limited labeled anomaly data.Previous methods have predominantly focused on modeling in the time domain,overlooking valuable information contained in the frequency domain and thus hitting a performance bottleneck.Taking this as a breakthrough point,this paper proposes a time-frequency attention based model for time series anomaly detection,TFA-TSAD.It first performs progressive decomposition of the input data to expose anomalies under different modes,then uses a purpose-built time-frequency domain modeling module with attention mechanisms to efficiently extract time-domain and frequency-domain information from the original data,improving anomaly detection performance.Finally,on top of the traditional error loss,a loss function incorporating average MSE is used to further improve model performance.Extensive experiments on multiple datasets demonstrate that the proposed model significantly outperforms 13 benchmark models.
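Frequency-domain information of the kind this abstract describes is typically obtained with a discrete Fourier transform. A minimal sketch (a naive DFT for clarity; the paper's actual time-frequency module is not reproduced here):

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT: magnitude spectrum (bins 0..n/2) of a real-valued series."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

# A sine with 2 cycles per window: spectral energy concentrates in bin k=2.
series = [math.sin(2 * math.pi * 2 * t / 8) for t in range(8)]
mags = dft_magnitudes(series)
peak = max(range(len(mags)), key=mags.__getitem__)
print(peak)  # 2
```

In practice an FFT (e.g. `numpy.fft.rfft`) replaces the quadratic-time loop, but the extracted magnitude features are the same.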
D-LINet:Time Series Forecasting Framework Integrating Dual-linear Layers and Dual Normalization
      GENG Haijun, LI Dongxin
      Computer Science. 2026, 53 (2): 170-179.  doi:10.11896/jsjkx.250100137
      Abstract ( 48 )   PDF(2740KB) ( 76 )   
      References | Related Articles | Metrics
      Time series forecasting plays a crucial role in various real-world applications such as energy management,traffic flow forecasting,and meteorological analysis.However,the presence of distribution shift and long-term dependency in time series data continues to limit the performance of both traditional methods and existing deep learning models in long-range forecasting.To address these challenges,this paper proposes an innovative model named D-LINet.The proposed model integrates the distribution normalization capability of the Dish-TS framework with the efficiency of linear mappings.By employing dual-direction normalization and dual-linear-layer designs,it effectively mitigates distribution shifts in both input and output spaces,while significantly enhancing the capture of periodic and trend-related features.A comprehensive evaluation of D-LINet on multiple real-world datasets demonstrates that,for both short- and long-term forecasting,D-LINet consistently achieves lower MSE and MAE compared to mainstream models such as Transformer,Informer,Autoformer and DLinear.In addition,experiments investigate the influence of input window length and the incorporation of prior knowledge on forecasting performance,providing valuable insights for subsequent model optimization.Overall,this study offers a novel solution to address complex distribution shifts,contributing to improved accuracy and robustness in time series forecasting.
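The pairing of distribution normalization with linear mappings described above can be sketched as follows. This is an illustrative stand-in for the Dish-TS-style normalize/predict/denormalize pipeline, with hypothetical hand-set weights rather than trained ones:

```python
import statistics

def forecast(window, weights, bias):
    """Sketch of a normalization-wrapped linear forecaster: normalize the
    input window, apply a linear layer, then map the prediction back to the
    original scale (mitigating distribution shift between windows)."""
    mu = statistics.fmean(window)
    sigma = statistics.pstdev(window) or 1.0  # guard against a flat window
    normed = [(x - mu) / sigma for x in window]
    pred = sum(w * x for w, x in zip(weights, normed)) + bias
    return pred * sigma + mu  # denormalize back to the input's scale

# Weights that copy the last normalized point yield a persistence forecast.
window = [10.0, 12.0, 14.0, 16.0]
weights = [0.0, 0.0, 0.0, 1.0]
print(forecast(window, weights, 0.0))  # 16.0
```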
      Time Series Forecasting Model Integrating Multi-scale Features and Attention Mechanism
      PAN Jian, WANG Xuhao
      Computer Science. 2026, 53 (2): 180-186.  doi:10.11896/jsjkx.250100113
      Abstract ( 45 )   PDF(2028KB) ( 89 )   
      References | Related Articles | Metrics
Currently,in the research of time series forecasting tasks,Transformer-based models primarily focus on extracting global and local features from time series data and improving attention mechanisms to reduce model complexity.However,existing methods often overlook the different granularity features exhibited by time series at multiple scales.To address this issue,this paper proposes a time series forecasting model that integrates multi-scale features and the attention mechanism,called MTSformer.Firstly,by down-sampling the original sequence,multiple scale subsequences are obtained,enabling the model to integrate multi-scale feature information and enhance generalization ability.Then,a multi-prediction-head structure is used to replace the traditional decoder,which improves prediction speed while reducing model complexity.Finally,experiments are conducted on five benchmark datasets,and the results show that compared with existing methods,the MTSformer model achieves average reductions of 24.51% in MSE and 17.84% in MAE for time series forecasting.
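The down-sampling step that produces multiple-scale subsequences can be illustrated with average pooling at several window sizes (the pooling operator is an assumption; the abstract does not specify it):

```python
def multiscale(series, scales=(1, 2, 4)):
    """Build coarser views of a series by non-overlapping average pooling
    at several window sizes, as in multi-scale feature extraction."""
    views = {}
    for s in scales:
        views[s] = [sum(series[i:i + s]) / s
                    for i in range(0, len(series) - s + 1, s)]
    return views

v = multiscale([1, 2, 3, 4, 5, 6, 7, 8])
print(v[2])  # [1.5, 3.5, 5.5, 7.5]
print(v[4])  # [2.5, 6.5]
```

Each view is then a separate input stream for the model, letting it see both fine-grained and trend-level structure.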
      Fine-grained Access Control Model for Big Data Based on Dynamic Data Sensitivity Levels
ZHANG Huan, HOU Mingxing, LIU Guangna, SHI Ying
      Computer Science. 2026, 53 (2): 187-195.  doi:10.11896/jsjkx.251000127
      Abstract ( 63 )   PDF(2565KB) ( 86 )   
      References | Related Articles | Metrics
      Aiming at the problem that static access control model is difficult to adapt to data dynamics and context variability in big data environment,this paper proposes a fine-grained access control model based on dynamic data sensitivity level.The model first constructs a multi-dimensional quantitative assessment system to dynamically calculate the real-time sensitivity level of data by analyzing the data content,contextual environment and historical operation behaviors,which overcomes the rigidity of traditional static classification.On this basis,the dynamic sensitivity level is taken as the core decision attribute,and deeply integrated with the attribute-based access control model,a context-adaptive permission dynamic granting and revocation mechanism is designed,which realizes the precise control of different users’ access behaviors at different times,places and scenarios.Experimental results show that the model can effectively perceive the changes in data value and risk while ensuring low performance overhead.Compared with the traditional role-based access control model and the static attribute-based access control model,it significantly improves the accuracy and security of privilege assignment,and it is especially suitable for the big data application scenarios with frequent data flow and changing security requirements,which provides an effective way to build an intelligent and adaptive data security protection system.
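A weighted aggregation of the three assessment dimensions into a single sensitivity level might look as follows. The weights, threshold semantics, and function names are hypothetical illustrations, not taken from the proposed model:

```python
def sensitivity(content, context, history, w=(0.5, 0.3, 0.2)):
    """Hypothetical weighted combination of the three assessment dimensions
    (data content, contextual environment, historical behaviour), each
    scored in [0, 1], into one dynamic sensitivity level."""
    return w[0] * content + w[1] * context + w[2] * history

def allow(user_clearance, content, context, history):
    """Grant access only if the user's clearance covers the data's
    current (dynamically computed) sensitivity level."""
    return user_clearance >= sensitivity(content, context, history)

# 0.5*0.8 + 0.3*0.5 + 0.2*0.4 = 0.63: clearance 0.7 suffices, 0.5 does not.
print(allow(0.7, 0.8, 0.5, 0.4))  # True
print(allow(0.5, 0.8, 0.5, 0.4))  # False
```

Because the sensitivity inputs are re-evaluated per request, the same user can gain or lose access as the data's context changes, which is the adaptive behaviour the abstract describes.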
      Data Placement Strategy Based on Erasure Code in Data Space
LIN Bing, JIANG Haiou, TAN Xiao, CHEN Xing, ZHENG Yuheng
      Computer Science. 2026, 53 (2): 196-206.  doi:10.11896/jsjkx.241200199
      Abstract ( 54 )   PDF(3378KB) ( 74 )   
      References | Related Articles | Metrics
In response to the multi-objective optimization layout problem of integrated data within scientific workflows in cloud-edge environments,factors such as data reliability,workflow execution latency,and data center load balancing are considered,and a data placement strategy based on erasure coding within the data space is proposed.Firstly,low-storage-overhead erasure code redundancy technology is proposed to provide fault tolerance in scientific workflow execution,and a data space is constructed to manage the diverse data generated by the workflow.Secondly,an Interactive Multi-Objective Evolution Algorithm(IMOEA) is designed to simultaneously optimize execution latency and datacenter load balancing.By interacting with decision-makers,the algorithm generates solutions that better align with the decision-makers’ expectations,enhancing the personalization and acceptability of the optimization results.Experimental results show that for workflows of different scales and types,compared to other algorithms such as DIST,MOGA,and RAND,IMOEA reduces the spacing metric(SP) by 2.3%~36.34%,15.71%~44.01%,and 22.50%~47.64%,and improves the hypervolume metric(HV) by 7.84%~38.23%,14.65%~48.4%,and 45.01%~109.45%,respectively.Additionally,the IMOEA algorithm effectively responds to decision-makers’ preferences,finding satisfactory data placement solutions.
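Erasure-coded redundancy can be illustrated with the simplest possible code, a single XOR parity block: any one lost data block can be rebuilt from the remaining blocks plus parity. Production systems use Reed-Solomon-style (n, k) codes that tolerate multiple losses, but the reconstruction principle is the same:

```python
def xor_parity(blocks):
    """XOR all equal-length blocks together. Used both to create the parity
    block and (given the surviving blocks plus parity) to rebuild one
    missing block, since XOR is its own inverse."""
    parity = [0] * len(blocks[0])
    for blk in blocks:
        parity = [p ^ b for p, b in zip(parity, blk)]
    return parity

data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
parity = xor_parity(data)
# Lose block 1; rebuild it from the two survivors plus the parity block.
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt)  # [4, 5, 6]
```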
      Anomaly Detection and Repair Methods for Dynamic Adjustment of Business Process
      LIU Fujie, FANG Xianwen
      Computer Science. 2026, 53 (2): 207-215.  doi:10.11896/jsjkx.241200037
      Abstract ( 50 )   PDF(3068KB) ( 79 )   
      References | Related Articles | Metrics
In the wave of digital transformation,the anomaly detection and repair of business processes are crucial for ensuring the operational efficiency and decision-making quality of enterprises,and higher requirements are being put forward for detection and repair technologies.Traditional anomaly detection methods can no longer meet the needs of real-time monitoring and adaptive adjustment of current business processes.Most existing methods focus on static analysis and do not fully consider the complexity and variability of the business environment,so they are difficult to adapt to dynamically changing processes.On this basis,this paper proposes the AAHM,which improves the accuracy of anomaly detection and the effectiveness of repair through dynamic parameter adjustment and real-time data feedback.To verify the effectiveness of this method,four groups of real event logs are used for simulation in the experiment.The results show that this method can effectively identify and repair abnormal behaviors and restore the normal execution of business processes through feature vector completion and behavior repair strategies.In addition,post hoc test analysis of the experimental results further verifies the effectiveness and rationality of the proposed method.
      Adaptive Data Stream Anomaly Detection Algorithm Based on Variable Density
      TANG Chenghai, YANG Yuqing, YANG Haifeng, CAI Jianghui, ZHOU Lichan
      Computer Science. 2026, 53 (2): 216-226.  doi:10.11896/jsjkx.241200044
      Abstract ( 95 )   PDF(2952KB) ( 79 )   
      References | Related Articles | Metrics
Data stream is a kind of data with a high generation rate and dynamically changing distribution.Anomaly detection on data streams aims to find data that deviate from expected behavior,so as to support decision-making in fields such as medical treatment,industrial production and finance.Existing data stream anomaly detection methods generally face the problems of high parameter sensitivity,high time and space overhead,and difficult threshold selection.To solve these problems,this paper proposes a variable-density-based adaptive anomaly detection method for data streams.Firstly,VLOF is defined.VLOF measures the density distribution of data points by comparing their local reachable density and local outlier factor changes under parallel neighborhood windows with different k values,reducing the impact of inaccurate results caused by a single neighborhood density measurement.Secondly,the relative growth rate and absolute mean rate of VLOF with respect to k reflect the dynamic change trend of the data stream;data points adapted to this trend are defined as core points,which accelerate the judgment of subsequent normal points.Finally,the relative growth rate and absolute mean rate are used as indicators of the theoretical distribution of data points,and the difference between the theoretical and actual distribution of new data points is calculated,so that points deviating from the theoretical distribution are identified as anomalies.To verify the effectiveness of the proposed algorithm,comparison experiments are conducted with 8 algorithms on multiple UCI datasets and real datasets.The experimental results show that compared with the baseline models,the proposed method performs well on accuracy,recall and F1 indicators,while improving time and space efficiency.
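The idea of measuring density under several neighbourhood sizes in parallel can be sketched with a toy multi-k neighbour-distance score. This is a deliberate simplification of VLOF (the local-reachability computation is not reproduced); the data and k values are hypothetical:

```python
def knn_score(points, i, k):
    """Mean distance from points[i] to its k nearest neighbours (1-D)."""
    d = sorted(abs(points[i] - p) for j, p in enumerate(points) if j != i)
    return sum(d[:k]) / k

def multi_k_score(points, i, ks=(2, 3, 4)):
    """Average the neighbourhood score over several k values, echoing the
    parallel-neighbourhood-window idea: no single k dominates the verdict."""
    return sum(knn_score(points, i, k) for k in ks) / len(ks)

data = [1.0, 1.1, 0.9, 1.2, 8.0]  # last point is an outlier
scores = [multi_k_score(data, i) for i in range(len(data))]
print(max(range(len(scores)), key=scores.__getitem__))  # 4
```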
Computer Graphics & Multimedia
      Multimodal Visual Detection for Underwater Sonar Target Images
      HUANG Jing, WANG Teng, LIU Jian, HU Kai, PENG Xin, HUANG Yamin, WEN Yuanqiao
      Computer Science. 2026, 53 (2): 227-235.  doi:10.11896/jsjkx.241200082
      Abstract ( 111 )   PDF(4033KB) ( 80 )   
      References | Related Articles | Metrics
Due to the limited underwater sonar image data and supervisory information for underwater targets,existing object detection algorithms are difficult to apply directly.To address this issue,this paper proposes an open-set underwater sonar image object detection method,USD(Underwater Sonar Detection),based on DETR.In the cross-modal feature fusion encoding module,it employs a multi-scale deformable attention mechanism to process image features iteratively,enabling the network to selectively focus on important information while reducing computational load.Simultaneously,it designs a multi-head self-attention mechanism to iterate text features,enhancing the model’s global modeling capability for sequences.Next,it utilizes a bidirectional attention mechanism to fuse text and image features,emphasizing the bidirectional relationships within the input sequences and enabling the network to capture more complex text-image interactions.Additionally,in the image-text feature decoding module,it uses image features output from the Encoder module to initialize queries,and applies the DN method to address slow model convergence during training.Experiments show that the proposed method achieves a mean average precision of 77.5% on a custom underwater sonar image dataset,outperforming other detection methods in precision while successfully implementing open-set object detection with robust performance.
      Constrained Multi-loss Video Anomaly Detection with Dual-branch Feature Fusion
      HAN Lei, SHANG Haoyu, QIAN Xiaoyan, GU Yan, LIU Qingsong, WANG Chuang
      Computer Science. 2026, 53 (2): 236-244.  doi:10.11896/jsjkx.250300103
      Abstract ( 63 )   PDF(2706KB) ( 78 )   
      References | Related Articles | Metrics
      To address the significant impact of spatiotemporal correlation learning on video anomaly detection performance,this paper proposes a dual-branch feature fusion-based constrained multi-loss video anomaly detection method(DBF-CML-transMIL).This method considers the saliency and correlation of segments in multiple instance learning(MIL),utilizing a multi-layer linear neural network to learn the spatial saliency features of each segment.A cascaded Transformer fusion module is designed to capture multi-level temporal correlations among instances.Then,a multi-loss model is employed to perform supervised learning on the fused features,enriching prediction diversity.To address the discreteness issue of the existing top-k method,a constrained sliding window top-k mechanism is introduced to enhance the correlation of anomalous events.Comparative and ablation experiments conducted on the ShanghaiTech and UCF-Crime datasets demonstrate that DBF-CML-transMIL achieves AUC scores of 97.33% and 83.82%,respectively.Furthermore,each module effectively enhances the performance of video anomaly detection.
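The constrained sliding-window top-k idea can be sketched as follows: rather than taking the k highest segment scores anywhere in the video (which may be temporally scattered), restrict selection to the best-scoring contiguous window. The window length, selection rule, and scores below are illustrative assumptions, not the paper's exact mechanism:

```python
def windowed_topk(scores, k, w):
    """Pick the length-w contiguous window with the highest total score,
    then return the top-k scores inside it, keeping the selected segments
    temporally close (unlike a global, unconstrained top-k)."""
    best = max(range(len(scores) - w + 1),
               key=lambda i: sum(scores[i:i + w]))
    window = scores[best:best + w]
    return sorted(window, reverse=True)[:k]

# Global top-2 would pick 0.95 and 0.9, which are 3 segments apart;
# the windowed variant keeps the selection inside one 3-segment window.
print(windowed_topk([0.1, 0.9, 0.8, 0.1, 0.95, 0.1], k=2, w=3))  # [0.95, 0.8]
```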
      Attention-based Audio-driven Digital Face Video Generation Method
      GUO Xingxing, XIAO Yannan, WEN Peizhi, XU Zhi, HUANG Wenming
      Computer Science. 2026, 53 (2): 245-252.  doi:10.11896/jsjkx.241200067
      Abstract ( 88 )   PDF(3346KB) ( 70 )   
      References | Related Articles | Metrics
      The key challenge in audio-driven digital face video generation lies in aligning the information from two different modalities,audio and video,to achieve lip synchronization.Existing technologies have primarily been developed using English datasets.However,due to the phonetic differences between Chinese and English,directly applying these methods to Chinese audio-driven face video generation results in issues such as blurred teeth and insufficient video clarity.This paper proposes M-CSAWav2Lip,an audio-driven digital face video generation method based on a GAN framework and enhanced by an attention mechanism.The method combines MFCC and Mel Spectrograms for audio feature extraction.By leveraging the temporal dynamics of MFCC and the frequency resolution of Mel Spectrograms,the method captures subtle variations in speech information comprehensively.During the digital face generation process,a network architecture based on attention mechanisms and residual connections is employed.This architecture uses weighted channel and spatial attention mechanisms to enhance the importance of features,improving the ability to extract key audio and video features.This allows for the effective encoding and fusion of Chinese audio-video information,generating lip movements and facial videos that are consistent with the audio content.Finally,the model is trained and tested on both a custom Chinese dataset and a general dataset.Experimental results demonstrate that the generated lip-synced digital face videos show improvements in both accuracy and quality.
      Semantic-guided Hybrid Cross-feature Fusion Method for Infrared and Visible Light Images
      JI Sai, QIAO Liwei, SUN Yajie
      Computer Science. 2026, 53 (2): 253-263.  doi:10.11896/jsjkx.250100123
      Abstract ( 64 )   PDF(5177KB) ( 72 )   
      References | Related Articles | Metrics
To address the difficulty of autoencoder-based image fusion algorithms in highlighting infrared(IR) salient targets,the challenge of simultaneously considering global structure and local detail information in existing fusion strategies,and the tendency of most algorithms to prioritize statistical metrics while overlooking support for high-level visual tasks,a semantic segmentation-guided image fusion method with a hybrid cross-feature mechanism is proposed.Shallow and deep skip connections are introduced between the encoder and decoder,employing a maximum value selection strategy to emphasize salient targets and reduce redundancy.The fusion strategy integrates global context and local fine-grained information through cross-attention and convolutional operations,combining different modal features within a single frame.The fused image is then fed into a segmentation network,where semantic loss guides high-level semantic information back to the fusion network,enabling the generation of a fused image rich in semantic detail.Experimental results demonstrate that the proposed method achieves average improvements of 33.93%,112.81%,49.89%,27.64%,and 23.87% in SD,MI,VIFF,Qabf,and AG metrics on the RoadScene dataset compared to seven baseline algorithms.Additionally,the intersection-over-union(IoU) scores for the car,person,and bicycle categories in the semantic segmentation task on the MSRS dataset increase by 3.47%,6.37%,and 9.57% on average,outperforming other state-of-the-art methods.
      Boundary-focused Multi-scale Feature Fusion Network for Stroke Lesion Segmentation
      LIU Chenhong, LI Fenglian, YANG Jia, WANG Suzhe, CHEN Guijun
      Computer Science. 2026, 53 (2): 264-272.  doi:10.11896/jsjkx.250300137
      Abstract ( 43 )   PDF(4075KB) ( 78 )   
      References | Related Articles | Metrics
Computer-aided diagnosis helps clinicians locate stroke-affected brain regions,improving diagnostic and therapeutic efficiency.Currently,the boundaries between stroke lesions and healthy tissues in medical images are often unclear,and most existing deep learning-based segmentation methods lack effectiveness in identifying small-sized lesions and handling blurred boundaries.To address this,the boundary-aware multi-scale feature integration network(BAMFNet) is proposed for more accurate stroke lesion segmentation.In BAMFNet,the multi-scale feature extraction module combines convolutional neural networks and Transformers to capture local and global features at multiple scales and uses involution to reduce information redundancy.The boundary enhancement and fusion module strengthens boundary-region features during fusion and integrates a multi-level information interaction mechanism.This enhances the boundary feature representation and combines deep and shallow features effectively.Experiments on the ATLAS v1.2,ATLAS v2.0 and ISLES 2022 stroke datasets show BAMFNet achieves Dice similarity coefficients of 62.93%,61.79%,and 86.66% respectively,outperforming other methods.
      Artificial Intelligence
      Survey on Complex Logical Query Methods in Knowledge Graphs
      CHEN Yuyin, LI Guanfeng, QIN Jing, XIAO Yuhang
      Computer Science. 2026, 53 (2): 273-288.  doi:10.11896/jsjkx.250400033
      Abstract ( 63 )   PDF(3089KB) ( 100 )   
      References | Related Articles | Metrics
Complex logical query answering(CLQA),as a technique for deeply mining the underlying logical relationships within knowledge graphs,aims to accurately respond to complex queries through reasoning over existing facts.This technology occupies an important position in knowledge graph research and has demonstrated significant advantages in application scenarios such as semantic search and recommendation systems,effectively promoting the widespread application and in-depth development of knowledge graphs in artificial intelligence.However,current research on complex logical query techniques is still scattered,and a systematic review that incorporates large language models is particularly lacking.In light of this,this paper delves into complex logical querying techniques across four major categories:geometric objects,probability distributions,fuzzy logic,and large language models.It comprehensively reviews existing models and systematically summarizes the typical datasets and evaluation metrics employed by these methods.Building on this foundation,the paper further analyzes the strengths and limitations of each method,aiming to provide comprehensive and in-depth insights into the development of complex logical querying techniques.Finally,the paper identifies the challenges currently faced by complex logical querying technologies and discusses potential research directions,offering valuable insights for future technological innovation and development.
      Chinese Hate Speech Detection Incorporating Hate Object Features and Variant Word Restoration Mechanism
      SUN Mingxu, LIANG Gang, WU Yifei, HU Haixin
      Computer Science. 2026, 53 (2): 289-299.  doi:10.11896/jsjkx.241200004
      Abstract ( 575 )   PDF(3281KB) ( 89 )   
      References | Related Articles | Metrics
      The rise of online hate speech and its significant societal harms have made automatic hate speech detection a critical task.Existing methods overlook the impact of hate objects on semantic extraction for hate speech detection,leading to inadequate contextual feature extraction and susceptibility to decision errors induced by specific expressions.Meanwhile,these methods fail to consider the interference of variant words on semantic extraction,resulting in a high miss rate in hate speech detection.Furthermore,the field of Chinese hate speech detection lacks the support of available datasets.To tackle these challenges,this paper proposes a hate speech detection method incorporating hate object features and variant word restoration mechanism.The method treats hate object recognition as an intermediate task,guiding the model to fully learn the contextual features of hate objects,thereby enhancing text comprehension in hate speech detection.Additionally,a variant word restoration module fine-tuned based on ChatGLM2-6B is proposed.It aims to effectively reduce the interference of variant words on hate speech detection by restoring variant words to their normal equivalents.Finally,a Chinese hate speech dataset is also presented to facilitate further research in this field.Experimental results verify that the proposed method achieves a 96.71% F1 score,outperforming baseline methods in all metrics.Specifically,the model exhibits a 4.21% improvement in detection accuracy for specific scenes and a 3.45% decrease in the miss rate caused by variant words.
Dynamic Interaction Dual-channel Graph Attention Network for Chinese and English Sarcasm Detection
      TAN Pingping, XU Ji, LI Yijun, WANG Hai
      Computer Science. 2026, 53 (2): 300-311.  doi:10.11896/jsjkx.250500015
      Abstract ( 33 )   PDF(5172KB) ( 84 )   
      References | Related Articles | Metrics
      Due to the complexity of Chinese semantics and the nuanced expression of emotions,Chinese text sarcasm detection presents a challenging task.Existing sarcasm detection methods are predominantly developed for English and struggle to adapt to the unique expressions and cultural connotations of Chinese.Therefore,this paper proposes a novel dynamic interaction dual-channel graph attention network(DiDu-GAT),which utilizes a unique dual-channel structure to analyze syntactic dependencies and emotional features in texts.DiDu-GAT incorporates a dynamic interaction mechanism to enhance its cross-channel learning capabilities,enabling comprehensive extraction of emotional information and syntactic patterns,thereby significantly improving the accuracy of Chinese sarcasm detection.Experimental results on the HIT Chinese sarcasm dataset(GuanSarcasm) and two public English sarcasm datasets(IAC-V1 and IAC-V2) demonstrate that the proposed method significantly outperforms existing baseline methods across key performance metrics,validating its effectiveness and superiority in both Chinese and English sarcasm detection tasks.
Industrial Text Classification for Chinese and Vietnamese Based on Prompt Learning and Adaptive Loss Weighting
      CHEN Lin, MA Longxuan, ZHANG Yongbing, HUANG Yuxin, GAO Shengxiang, YU Zhengtao
      Computer Science. 2026, 53 (2): 312-321.  doi:10.11896/jsjkx.250300038
      Abstract ( 35 )   PDF(4909KB) ( 81 )   
      References | Related Articles | Metrics
Cross-border industrial text classification is a fundamental task that supports big data analysis in cross-border industries.With the rapid growth of cross-border industrial data in Southeast Asia,there is an increasing demand for the analysis and processing of industrial data,particularly with respect to industrial text classification.However,cross-border industrial text classification faces several challenges,including linguistic differences across languages,data imbalance among languages,and the scarcity of annotated data.These issues are particularly pronounced in low-resource languages,making cross-border industrial data classification more difficult.To address this issue,this paper proposes a few-shot cross-border industrial text classification method based on prompt learning,combined with an adaptive loss weighting strategy,which significantly enhances the model's classification performance in cross-border scenarios.Specifically,the proposed model mitigates the issue of data scarcity within the prompt-learning framework by leveraging the prior knowledge of pre-trained models to enhance few-shot learning capabilities.Furthermore,cross-lingual text pairs are constructed to facilitate knowledge transfer and semantic alignment in semantic space.Additionally,an innovative dynamic hybrid loss function is designed,integrating cross-entropy loss,focal loss,and label smoothing loss in a multi-objective optimization framework.The loss terms are dynamically weighted based on an uncertainty-based weighting mechanism:cross-entropy loss ensures fundamental classification capability,focal loss enhances the focus on hard-to-classify samples,and label smoothing effectively mitigates the risk of overfitting.Experimental results demonstrate that the proposed method significantly outperforms existing mainstream approaches in cross-border Chinese and Vietnamese industrial text classification tasks,particularly in few-shot learning scenarios with data scarcity and language imbalance.This approach provides an efficient solution and offers new research perspectives for processing low-resource languages.
Method for Span-level Sentiment Triplet Extraction by Deeply Integrating Syntactic and Semantic Features
      CHANG Xuanwei, DUAN Liguo, CHEN Jiahao, CUI Juanjuan, LI Aiping
      Computer Science. 2026, 53 (2): 322-330.  doi:10.11896/jsjkx.250100061
      Abstract ( 45 )   PDF(2612KB) ( 79 )   
      References | Related Articles | Metrics
      Aspect sentiment triple extraction aims to extract aspects and their corresponding opinion words and sentiment polarities in the form of triples from sentences.Existing extraction models suffer from issues such as insufficient exploitation of syntactic and semantic information in sentences and incorrect identification of multi-word entity boundaries.To address these issues,this paper proposes a span extraction model that deeply integrates syntactic and semantic features(Span Extractor Incorporating Semantic and Syntax Features,SESS).SESS combines self-attention mechanisms with multi-channel graph convolutional networks to deeply explore the associations between syntactic and semantic features,enhancing the model’s ability to handle complex sentence structures and multi-word entities.Additionally,the model employs a span-based extraction method to extract aspect and opinion words,capturing the overall semantics of long entities and reducing sentiment inconsistency issues.The experiments conducted on the standard dataset ASTE-Data-V2 demonstrate that SESS outperforms the vast majority of comparison models in terms of F1 score,particularly in processing complex sentences and one-to-many,many-to-one sentiment relationships.Furthermore,ablation experiments and case analysis validate the effectiveness of each module of the model and its contribution to task performance,further proving the advancement and robustness of the proposed method.
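Span-based extraction starts from a candidate set of token spans that the model then scores as aspect or opinion terms. A minimal enumeration sketch (the maximum span length is an arbitrary illustrative choice):

```python
def enumerate_spans(tokens, max_len=3):
    """All candidate (start, end, tokens) spans up to max_len tokens long,
    with `end` exclusive—the candidate set a span-based extractor scores."""
    return [(i, j, tokens[i:j])
            for i in range(len(tokens))
            for j in range(i + 1, min(i + max_len, len(tokens)) + 1)]

spans = enumerate_spans(["the", "battery", "life", "is", "great"])
print(len(spans))  # 12
```

Treating multi-word units such as "battery life" as single candidates is what lets span-level methods avoid the entity-boundary errors of token-by-token tagging.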
      Background Structure-aware Few-shot Knowledge Graph Completion
      ZHANG Jing, PAN Jinghao, JIANG Wenchao
      Computer Science. 2026, 53 (2): 331-341.  doi:10.11896/jsjkx.250100107
      Abstract ( 75 )   PDF(2842KB) ( 108 )   
      References | Related Articles | Metrics
Few-shot knowledge graph completion aims to predict unseen facts in long-tail relationships within knowledge graphs using only a small number of reference data.The key challenge of this task lies in how to efficiently encode entity and relation features under conditions of data scarcity,and to construct an effective triplet scoring function.Existing few-shot knowledge graph completion models generally overlook the impact of entity pair contextual information on both entity encoding and the scoring function,while also suffering from insufficient relation representation learning.To address these issues,this paper proposes a background-structure-aware few-shot knowledge graph completion model,BSA.Firstly,it designs a metric for entity pair contextual interaction,which guides the model to focus attention on neighbor nodes that are structurally similar to the central entity by measuring the structural influence of neighboring entities,thereby reducing the negative impact of noisy neighbors.Secondly,during the relation representation learning phase,it incorporates background relation information from the knowledge graph that is semantically and structurally similar to the target relation to enhance its embedding representation.Finally,it introduces a contextual interaction metric for the head-tail entity pair in the triplet scoring function to improve the model’s reasoning capability for complex relations.Experimental results show that,compared to the best results from baseline models,the BSA model improves MRR,Hit@5,and Hit@1 by 0.4,0.8,and 0.5 percentage points on the NELL-One dataset,respectively,and improves MRR,Hit@10,and Hit@5 by 1.9,2.2,and 2.2 percentage points on the Wiki-One dataset,respectively,demonstrating the effectiveness and feasibility of the proposed method.
      Human Motion Recognition Algorithm Based on Wearable Sensors
      JIANG Lei, WANG Zi, YANG Rong, HAN Wanglin
      Computer Science. 2026, 53 (2): 342-348.  doi:10.11896/jsjkx.241200083
      Abstract ( 45 )   PDF(3094KB) ( 71 )   
      References | Related Articles | Metrics
Against the backdrop of global aging,knee exoskeletons are widely used in the maintenance of knee joint health and rehabilitation training for the elderly.Knee exoskeletons often employ embedded devices for the recognition of lower limb motion states,which requires finding a balance between the selection and layout of sensors,and the accuracy and computational complexity of algorithms.Therefore,this paper studies a human motion recognition algorithm suitable for knee exoskeletons,which uses two inertial measurement units(IMUs) on the thigh and lower leg to collect lower limb motion data.The recognition method includes three steps:feature combination,feature selection,and motion state recognition.It optimizes feature representation through cross-method feature combination.The improved One-vs.-Rest(OvR) strategy is applied to address motion recognition issues,within which a fusion algorithm combining Relief with Pearson correlation coefficients and a machine learning backward selection method is used for feature selection to reduce computational complexity,and historical information along with other state data are integrated into the model training to further enhance accuracy.The model classifies six types of daily motion states with an accuracy rate of up to 97.76%.Experimental results verify that the proposed algorithm can accurately and quickly recognize lower limb motion states with a limited number of sensors,providing an accurate,computationally lightweight solution for precise detection and real-time control of knee exoskeletons.
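The Relief-plus-Pearson fusion used for feature selection in the abstract above can be illustrated with a minimal sketch (not the authors' implementation; the equal-weight min-max fusion, the basic single-neighbor Relief variant, and all function names are assumptions for illustration, and the sketch assumes every class has at least two samples):

```python
import numpy as np

def relief_scores(X, y, n_iter=50, rng=None):
    """Basic Relief: reward features that separate a sample from its nearest
    miss (other class) more than from its nearest hit (same class)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        same = (y == y[i])
        same[i] = False                      # exclude the sample itself
        diff = ~(y == y[i])
        dists = np.abs(X - X[i]).sum(axis=1)
        hit = X[np.where(same)[0][np.argmin(dists[same])]]
        miss = X[np.where(diff)[0][np.argmin(dists[diff])]]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_iter

def pearson_scores(X, y):
    # |corr(feature, label)| as a simple relevance proxy
    return np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

def fused_select(X, y, k, alpha=0.5):
    """Rank features by an equal-weight fusion of normalized Relief and Pearson scores."""
    norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-12)
    score = alpha * norm(relief_scores(X, y)) + (1 - alpha) * norm(pearson_scores(X, y))
    return np.argsort(score)[::-1][:k]
```

In the paper's pipeline, a backward selection step and the OvR classifiers would then operate on the retained feature subset; this sketch covers only the filter-style ranking.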
      Evolutionary Multi-task Optimization Algorithm Based on Transfer Knowledge Selection and Population Reduction
      LI Erchao, HUANG Pengfei
      Computer Science. 2026, 53 (2): 349-357.  doi:10.11896/jsjkx.250600197
      Abstract ( 57 )   PDF(2338KB) ( 62 )   
      References | Related Articles | Metrics
      Evolutionary multi-task optimization has emerged as one of the research hotspots in the field of computational intelligence in recent years,with its principle being to enhance the efficiency of algorithms in simultaneously solving multiple tasks through knowledge transfer between tasks.Since improper selection of transfer knowledge can reduce positive knowledge transfer between tasks,how to appropriately select transfer knowledge has become a key research direction.Additionally,during the algorithm’s evolutionary process,single-layer population reduction is insufficient to sustain the algorithm’s efficient optimization performance over the long term.Based on this,this paper proposes an evolutionary multi-task optimization algorithm(MTDE-MCT) based on transfer knowledge selection and population reduction.Firstly,the task population is initialized,and fitness evaluation is conducted,utilizing a combined index based on Manhattan distance and fitness values for the selection of transfer knowledge.Next,a subpopulation alignment strategy is applied to eliminate feature differences in transfer individuals between tasks.Finally,a multi-layer population reduction strategy is proposed,which linearly reduces the task population size based on the algorithm’s evolutionary stage.To validate the performance of the proposed algorithm,comparisons are made with classic algorithms from recent years using the CEC2017 and WCCI2020 problem test sets.The experimental results demonstrate that the proposed algorithm exhibits strong competitiveness in solving multi-task optimization problems.
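A combined index of Manhattan distance and fitness for picking transfer individuals, as described above, might look like the following sketch (illustrative only; the weight `w`, the min-max normalization, and the use of the target population's centroid are assumptions, not the paper's exact formula):

```python
import numpy as np

def select_transfer(source_pop, source_fit, target_pop, n_transfer, w=0.5):
    """Rank source-task individuals by a combined index of Manhattan distance
    to the target population's centroid (proximity) and their own fitness
    (quality, minimization assumed); lower combined score is better."""
    centroid = target_pop.mean(axis=0)
    dist = np.abs(source_pop - centroid).sum(axis=1)           # Manhattan distance
    nd = (dist - dist.min()) / (np.ptp(dist) + 1e-12)          # normalize to [0, 1]
    nf = (source_fit - source_fit.min()) / (np.ptp(source_fit) + 1e-12)
    combined = w * nd + (1 - w) * nf
    return np.argsort(combined)[:n_transfer]
```

The selected individuals would then pass through the subpopulation alignment step before being injected into the target task's population.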
      Fast Consensus Seeking in Distributed Multi-agent System Using Topology Virtual Structural Hole Node
      XIE Guangqiang, QIU Fengyang, LI Yang
      Computer Science. 2026, 53 (2): 358-366.  doi:10.11896/jsjkx.241200109
      Abstract ( 51 )   PDF(5595KB) ( 89 )   
      References | Related Articles | Metrics
Distributed agent consensus seeking is a significant problem in the research of MASs.The theory of structural holes in social networks shows that nodes occupying holes in the network can promote information fusion and expedite collaboration.However,leveraging topology structural hole information to hasten system consensus in distributed switching topology scenarios poses a challenge.In addition,virtual leaders possess the advantages of guidance,obstacle avoidance,and assistance in achieving desired objectives,and are widely used in consensus tracking.Inspired by this,this paper proposes a topology virtual structural hole construction consensus model(VSHCC).Firstly,an important node evaluation strategy associated with each element(point,edge,and clique) of the topology is designed to quantify the importance of nodes and distinguish important nodes from multiple perspectives.Secondly,the construction method of the virtual structural hole node is proposed to fuse the important nodes’ information.Then a consensus evolution rule is designed for the virtual structural hole node so that the agent can evolve towards a highly favorable position and accelerate the convergence process.In addition,a geometric constraint set based on the acute-angle test graph(AATG) is introduced to ensure connectivity and appropriately expand the constraint set to speed up convergence.Experimental simulations show that the proposed algorithm can accelerate the consensus speed of MAS and enhance the consistency of the system.
      Computer Network
      Review of Offloading Technologies Research in Mobile Edge Computing
      HUAN Haisheng, ZHAO Peng, CHEN Nuo, KA Zuming
      Computer Science. 2026, 53 (2): 367-378.  doi:10.11896/jsjkx.250100058
      Abstract ( 80 )   PDF(2421KB) ( 75 )   
      References | Related Articles | Metrics
With the continuous evolution of the IoT and 5G technologies,data traffic has shown an unprecedented explosive growth trend.Against this backdrop,the traditional centralized cloud computing model can no longer meet the requirements of low latency and low energy consumption for terminal data processing.MEC,which can provide immediate services at the source of data generation,has gradually become the optimal solution to this problem.Computing offloading is a core component of MEC technology,and its performance is affected by various factors.Optimizing computing offloading to enhance performance has become the focus of global researchers’ attention.This paper aims to deeply explore the computing offloading technology of MEC.Firstly,it reviews the development history of MEC and elaborates on the concepts and architectures of MEC and computing offloading.Secondly,starting from the practical applications of computing offloading technology in different scenarios,it discusses the current research status and introduces the optimization technologies related to computing offloading.Finally,it analyzes the challenges faced by computing offloading technology and looks ahead to the future research directions of MEC computing offloading technology,so as to point out the direction for subsequent research work.
      Load Balancing Task Allocation Strategy for User-oriented Mobile Crowdsensing
      LI Fan, WU Yahui, DENG Su, MA Wubin, ZHOU Haohao
      Computer Science. 2026, 53 (2): 379-386.  doi:10.11896/jsjkx.241100196
      Abstract ( 40 )   PDF(3264KB) ( 81 )   
      References | Related Articles | Metrics
In mobile crowdsensing systems,the user’s participation intention and experience have an important impact on the overall performance and long-term sustainable operation of the system.Most existing user-oriented task allocation strategies only consider the cost-benefits of users and ignore the load balancing issue in the task allocation process,resulting in the premature exit of some key nodes due to heavy loads,which affects the long-term performance of the system.Therefore,this paper proposes a user-centered,long-time dynamic task allocation model.Aiming at the dynamics and persistence of the model,a solution algorithm based on improved Lyapunov optimization theory is proposed,which simultaneously considers the dual optimization of overall system benefits and load balancing,achieving optimal task allocation with load balancing constraints in dynamic environments.Experimental results demonstrate that the proposed algorithm improves the load balancing of users by nearly 20% under the premise of ensuring queue stability and optimal overall system benefits.
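The Lyapunov-style trade-off between system benefit and load balancing can be sketched with a simple drift-plus-penalty rule (a toy illustration under assumed per-user virtual load queues, not the paper's algorithm; `V` is the standard Lyapunov parameter trading off cost against queue stability):

```python
import numpy as np

def assign_tasks(costs, queues, V=1.0):
    """Drift-plus-penalty assignment: for each task, pick the user minimizing
    V * cost + current virtual load queue, so heavily loaded users are avoided.
    costs: (n_tasks, n_users) per-user cost of each task;
    queues: one virtual load queue per user (grows with assigned load)."""
    Q = queues.copy()
    assignment = []
    for c in costs:
        u = int(np.argmin(V * c + Q))   # greedy per-slot minimization
        assignment.append(u)
        Q[u] += c[u]                    # queue backlog accumulates assigned load
    return assignment, Q
```

With identical costs, the rule alternates between users, which is the load-balancing behavior the virtual queues are meant to enforce; larger `V` weights raw cost more heavily relative to balance.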
      Multi-objective Optimization for Virtual Machine Placement in Large-scale Hadoop Cluster
      WEN Jia, WU Shuxia, YU Zhengxin, MIAO Wang, CHEN Zheyi
      Computer Science. 2026, 53 (2): 387-395.  doi:10.11896/jsjkx.241200020
      Abstract ( 34 )   PDF(3827KB) ( 77 )   
      References | Related Articles | Metrics
      Virtualization technology has become the core support for the rapid development of cloud computing.As a popular distributed framework in cloud environments,the performance of the Hadoop cluster is usually limited by the low efficiency of resource management.With the increasing data volume and cluster scale,it is challenging to efficiently optimize Virtual Machine(VM) placement in the Hadoop cluster to reduce energy consumption,increase resource utilization,and lessen file access latency.To address this important challenge,this paper proposes a novel Multi-objective Optimization with Variable Length Double chromosome(MO-VLD) method for VM placement in the large-scale Hadoop cluster.Firstly,a double chromosome structure is designed by combining the variable length chromosome with NSGA-III.Next,two-stage crossover and mutation operations are introduced to enhance the exploration diversity of solution space.Using the real-world runtime datasets of the Google cluster,extensive simulation experiments demonstrate that the proposed MO-VLD method can effectively handle the dynamic resource demands and improve the resource management efficiency of the Hadoop cluster.Compared to benchmark methods,the MO-VLD method shows superior performance in terms of energy consumption,resource utilization,and file access latency.
      Game Theory-based Optimization of Flight Paths and Task Offloading in UAV-assisted MEC Systems
      WEI Manyi, WANG Gaocai, WEN Yihu
      Computer Science. 2026, 53 (2): 396-405.  doi:10.11896/jsjkx.250300088
      Abstract ( 69 )   PDF(2883KB) ( 77 )   
      References | Related Articles | Metrics
      In traditional MEC systems,fixed MEC server deployments are susceptible to communication blockages due to multipath and non-line-of-sight(NLOS) effects.UAV-assisted MEC has emerged as a solution.This paper proposes a UAV-assisted MEC system with a clustering-based flight path optimization method to extend the UAV’s operational time and the network lifetime.K-means clustering partitions users into clusters,and the shortest UAV flight path problem among cluster centers is formulated as a TSP and solved using Chaos Game Optimization(CGO) combined with 2-Opt.A potential game-based task offloading strategy is then designed to optimize system energy consumption and latency from the users’ perspective.The global optimization problem is modeled as a potential game with at least one Nash Equilibrium(NE),which corresponds to the global optimal solution.Experimental results show that the proposed methods effectively minimize system energy consumption and latency while ensuring complete service coverage.
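The 2-Opt refinement applied to the inter-cluster flight tour can be sketched as follows (a plain 2-Opt local search on Euclidean points; in the paper it post-processes Chaos Game Optimization solutions, which are not reproduced here):

```python
import numpy as np

def tour_length(points, tour):
    """Total length of a closed tour over the given points."""
    return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """2-Opt: repeatedly reverse a tour segment whenever doing so shortens the
    closed tour, until no improving reversal exists (a local optimum)."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(points, cand) + 1e-9 < tour_length(points, best):
                    best, improved = cand, True
    return best
```

For the unit square with a crossing tour, one segment reversal removes the crossing and yields the optimal perimeter tour of length 4.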
      Deep Reinforcement Learning-based Aircraft Task Offloading in Low Earth Orbit Satellite Networks
      LI Fang, YUAN Baochun, SHEN Hang, WANG Tianjing, BAI Guangwei
      Computer Science. 2026, 53 (2): 406-415.  doi:10.11896/jsjkx.250200092
      Abstract ( 66 )   PDF(4154KB) ( 61 )   
      References | Related Articles | Metrics
LEO satellite communication has the advantages of long transmission distance,wide coverage,and freedom from terrain restrictions.It has become an important communication method for the civil aviation transportation and general aviation industries.However,the low-orbit satellite network is a highly heterogeneous and dynamic environment.The mobility of satellite nodes,the complexity of communication links,the uneven spatial and temporal distribution of aircraft,and the coexistence of multiple services make task offloading and resource allocation face many challenges.To this end,this paper proposes an aircraft task offloading method based on DDRL,with the purpose of maximizing the overall effectiveness of the system.Firstly,the system utility maximization problem is modeled as a joint optimization problem of task offloading and resource allocation,taking into account the computing power and coverage time of LEO satellites.Next,the problem is transformed into a Markov decision process,using the DDQN algorithm to learn the optimal task offloading decision,and based on this,TD3 is used to obtain the optimal resource allocation strategy.Simulation experiments show that under different computing resources and communication resources,the proposed scheme is better than other benchmark schemes in terms of system utility,proving the usability of the proposed framework.
      Energy-efficiency RoI Slicing Capturing Task Scheduling Scheme for LEO Satellites
      GAO Peize, TIAN Lifeng, LI Yuepeng, ZENG Deze, ZHONG Liang, GONG Wenyin
      Computer Science. 2026, 53 (2): 416-422.  doi:10.11896/jsjkx.250200054
      Abstract ( 35 )   PDF(3357KB) ( 91 )   
      References | Related Articles | Metrics
With the rapid advancement of LEO satellite technology,LEO satellites equipped with high-resolution,adjustable cameras have become essential for complex EOMs.These missions often require multi-satellite collaboration to capture multiple RoIs.Unlike traditional single-satellite capture methods,which focus solely on the energy consumption of image capturing,multi-satellite collaboration involves frequent camera angle adjustments to ensure complete RoI coverage,leading to significant energy consumption for camera rotations.The allocation of RoI slicing capturing tasks is challenging,as it must balance the energy consumption of both camera rotation and image capturing.This paper addresses the RoI slicing capturing task allocation problem by considering the orbital directions of satellites and the trade-off between energy consumption from camera rotation and capturing.The objective is to achieve full RoI coverage while minimizing the total energy consumption of the capturing tasks.To this end,this paper proposes ERSCTS,an Energy-efficient RoI Slicing Capturing Task Scheduling algorithm tailored for heterogeneous LEO satellites.Through comprehensive comparative experiments with traditional satellite task scheduling algorithms,it demonstrates that the ERSCTS algorithm significantly reduces satellite energy expenditure.Experimental results show that ERSCTS achieves an average energy consumption reduction of 24.5% while ensuring complete RoI coverage.
      Information Security
      Heterogeneous Graph Attention Network-based Approach for Smart Contract Vulnerability Detection
      LI Chengyu, HUANG Ke, ZHANG Ruiheng, CHEN Wei
      Computer Science. 2026, 53 (2): 423-430.  doi:10.11896/jsjkx.241200144
      Abstract ( 39 )   PDF(1646KB) ( 65 )   
      References | Related Articles | Metrics
Security vulnerabilities in smart contracts on blockchain platforms such as Ethereum have long been a focus of industry attention.Bytecode analysis has become one of the mainstream approaches for identifying smart contract vulnerabilities.However,traditional methods,such as symbolic execution,rely on predefined vulnerability rules,leading to inefficiencies and low precision.Deep learning-based methods,on the other hand,lack a comprehensive understanding of bytecode semantics and struggle to simultaneously filter noise generated during the compilation process while capturing complete control flow and data flow information.To address these challenges,this paper proposes a novel method for constructing critical semantic graphs to detect smart contract vulnerabilities.Firstly,a set of specific denoising preprocessing rules is defined to remove irrelevant data while preserving key semantic information related to vulnerabilities.Next,a heterogeneous graph representation method is introduced to capture rich program semantics.Finally,a vulnerability detection model based on the heterogeneous graph attention network(HAN) is designed.Experimental results demonstrate that the proposed method outperforms existing approaches for smart contract vulnerability detection.For denial of service,integer overflow,timestamp dependency,and unchecked function return value vulnerabilities,the F1 scores of the proposed method are improved by 17.75,5.94,28.94,and 27.85 percentage points,respectively.
      Weakly-decentralized Scheme for Sensitive Data Sharing with Hierarchical Access Control
      ZHENG Kaifa, SUN Wei, ZHOU Junxu, WU Yunkun, XU Zhen, LIU Zhiquan, HE Qiang
      Computer Science. 2026, 53 (2): 431-441.  doi:10.11896/jsjkx.250900047
      Abstract ( 39 )   PDF(3154KB) ( 70 )   
      References | Related Articles | Metrics
In distributed application scenarios such as cloud-edge collaboration,achieving efficient,searchable,and decentralized fine-grained access control for sensitive data sharing presents a core challenge.Traditional schemes are often hindered by high computational overhead,a lack of ciphertext retrieval functionality,and the inherent security risks of centralized architectures.Therefore,this paper proposes a hierarchical access control scheme for sensitive data sharing in a semi-decentralized manner(HAC-SDS).Firstly,by employing a cloud-edge-device collaborative computing model,the scheme offloads significant computational and storage burdens from the client-side to cloud and edge servers,effectively reducing overhead.Secondly,an encrypted inverted index is constructed to support fast and fine-grained ciphertext retrieval,which is integrated with an attribute revocation and dynamic update mechanism to significantly enhance efficiency.Finally,blockchain technology is applied to key management;its decentralized nature fundamentally eliminates the single-point bottleneck and trust risks inherent in traditional centralized solutions.Security analysis demonstrates that the ciphertext achieves indistinguishability,thereby effectively guaranteeing data confidentiality.Experimental results confirm that the proposed ciphertext retrieval scheme is both efficient and practical for real-world applications.
      Cloud Email Defense Resource Allocation Method Based on User Behavior
      ZHANG Wanyou, SONG Lipeng
      Computer Science. 2026, 53 (2): 442-453.  doi:10.11896/jsjkx.250300041
      Abstract ( 54 )   PDF(4768KB) ( 90 )   
      References | Related Articles | Metrics
In recent years,with the widespread adoption of cloud-based email applications,the number of users has steadily increased,and security threats such as phishing attacks have become more prevalent.Effective defense resource allocation has thus become crucial for ensuring the stable operation of cloud email systems.However,existing resource allocation methods often fail to adequately consider factors such as user behavior,the interrelationships between multiple cloud nodes,and lateral phishing attacks,leading to inefficiencies in resource utilization and suboptimal defense performance.To address these issues and enhance the security and resource utilization of cloud email nodes,this paper proposes a user behavior-based defense resource allocation method.Firstly,a risk assessment model for cloud email nodes is developed,which comprehensively evaluates the success rate of phishing attacks and the cloud risks associated with both individual nodes and multiple interconnected nodes.Next,dynamic defense resource allocation algorithms are designed for both individual nodes and collaborative resource allocation across multiple interlinked nodes.These algorithms adjust resource allocation strategies in real-time based on factors such as user login probabilities,trust relationships,behavior patterns,the available defense resources at each node,and the current threat landscape.Experimental results show that,compared to existing methods,the proposed approach enables collaborative resource allocation,improves utilization,achieves the lowest system loss,and offers a better solution for cloud email node defense resource allocation.