
-
Application of Large Language Models in Medical Education: Current Situation, Challenges and Future
TU Ji, XIAO Wendong, TU Wenji, LI Lijian. Application of Large Language Models in Medical Education: Current Situation, Challenges and Future[J]. Computer Science, 2025, 52(6A): 240400121-6. doi:10.11896/jsjkx.240400121
Digitization of medical education is an inevitable trend in the development of medical education. By introducing large language models into medical education, the limitations of traditional medical education can be overcome: students' learning interest and participation can be improved, medical education can be personalized, and individualized clinical practice teaching and scientific research training can be strengthened, which improves teaching efficiency and effectiveness. This paper reviews the development of large language model technology and the technical progress of medical large language models. It also lists the application scenarios of large language models in medical education and points out seven challenges they face in this field. It is pointed out that the future development of medical-education large language models is to build an autonomous and controllable collaborative medical-education large language model using a technology route driven by both knowledge and data.
-
Application of Large Language Models in Recommendation System
LI Bo, MO Xian. Application of Large Language Models in Recommendation System[J]. Computer Science, 2025, 52(6A): 240400097-7. doi:10.11896/jsjkx.240400097
Large language models (LLMs) play a key role in recommendation systems (RS), for example in feature engineering and feature encoding, pre-training and fine-tuning, and prompt learning. Through feature engineering and feature encoding, LLMs improve the personalization and accuracy of the recommendation system and optimize the generalization ability and adaptability of the model. Studies show that LLMs can enrich user profiles and extract item features in the feature engineering stage. The pre-training and fine-tuning phases involve training on a large amount of unlabeled data to prepare for downstream task deployment. The prompt learning phase improves the model's ability to understand and solve recommendation tasks by designing effective instructions and prompts. This paper also discusses the challenges of applying LLMs to recommendation systems, such as high computational cost, API dependency, and data noise, for which researchers are exploring various optimization strategies. The potential future development of recommendation systems focuses on data enhancement, fine-tuning efficiency improvement, prompt design optimization, and interpretability enhancement. These comprehensive analyses provide a solid theoretical foundation for the continuous development and innovation in the field of recommendation systems.
-
Study on Efficiency of Large Model in Recognizing Rumors from Different Sources
HE Jing, CHEN Yiran. Study on Efficiency of Large Model in Recognizing Rumors from Different Sources[J]. Computer Science, 2025, 52(6A): 240700131-5. doi:10.11896/jsjkx.240700131
This study aims to address the new challenges faced by online rumor recognition and to explore the effectiveness of large models in recognizing rumors from different sources. Domestic and foreign rumor datasets and an AI-generated rumor dataset are constructed, and the rumor source identification ability of four large models is tested under zero-shot settings. The research finds that a single large model has low accuracy in identifying rumors and shows a clear tendency towards certain errors. To improve recognition performance, methods such as pre-training, fine-tuning, and ensemble learning are adopted, which significantly enhance the performance of the large model. Furthermore, a model-collision-based ensemble learning method is proposed to improve the effectiveness of rumor source recognition by utilizing multi-model feedback. Experimental results show that the ensemble learning framework can integrate the advantages of various models and significantly improve recognition accuracy. This study verifies the potential and improvement directions of large language models in rumor recognition through empirical research, which helps to cope with the current complex online rumor environment and maintain the clarity of cyberspace.
-
Domain UML Model Automatic Construction Based on Fine-tuning Qwen2
LI Jiawei, DENG Yuandan, CHEN Bo. Domain UML Model Automatic Construction Based on Fine-tuning Qwen2[J]. Computer Science, 2025, 52(6A): 240900155-4. doi:10.11896/jsjkx.240900155
This paper proposes a domain UML (unified modeling language) automatic construction system based on large model fine-tuning technology, which automatically converts natural language descriptions of software system production requirements in various domains into UML class diagrams that comply with the unified modeling language standard. The research process includes the construction of natural-text datasets, model fine-tuning, quantized deployment, and the development of front-end interactive interfaces. With this system, non-professional users can automatically generate standard-compliant UML class diagrams through simple natural language input, greatly reducing time and labor costs.
-
Low-resource Vietnamese Speech Synthesis Based on Phoneme Large Language Model and Diffusion Model
ZOU Rui, YANG Jian, ZHANG Kai. Low-resource Vietnamese Speech Synthesis Based on Phoneme Large Language Model and Diffusion Model[J]. Computer Science, 2025, 52(6A): 240700138-6. doi:10.11896/jsjkx.240700138
With the development of deep learning technology and the progress of speech synthesis research, synthetic speech in widely spoken, high-resource languages such as Chinese and English has increasingly approached natural speech. Vietnamese, a tonal language closely related to Chinese, belongs to the Vietic branch of the Austroasiatic language family. Due to the limited scale of available corpus data and the depth of related research, Vietnamese speech synthesis still falls significantly short of natural speech. Under the premise of low resources, two methods are proposed to improve the naturalness of Vietnamese speech synthesis: 1) a phoneme encoder is constructed based on the pre-trained phoneme large language model XPhoneBERT, which significantly improves the prosodic expressiveness of Vietnamese speech synthesis with a limited dataset; 2) the U-Net structure in the lightweight diffusion TTS model LightGrad is improved by adding nested skip pathways, so that the model can be fully trained under low-resource conditions, capture more effective information, and improve the accuracy of noise prediction, thereby improving the quality of the synthesized speech. Experimental results show that both the objective and subjective evaluation performance of the Vietnamese speech synthesis system are significantly improved by the proposed methods: MCD and MOS reach 6.25 and 4.22 respectively, a clear decrease and increase respectively compared with 7.44 and 3.56 for the baseline system.
-
Intelligent Prediction of Network Traffic Based on Large Language Model
ZHOU Lei, SHI Huaifeng, YANG Kai, WANG Rui, LIU Chaofan. Intelligent Prediction of Network Traffic Based on Large Language Model[J]. Computer Science, 2025, 52(6A): 241100058-7. doi:10.11896/jsjkx.241100058
With the exponential growth in the number of 5G base stations and the surge in connected terminals, the scale of network traffic is expected to grow exponentially, exhibiting significant nonlinear, multimodal, and bursty characteristics and posing new challenges to network resource allocation and optimization. To address these challenges, this paper proposes a network traffic prediction method based on large language models (NT-LLM). This approach leverages reprogramming techniques to transform traditional network traffic data into a format suitable for LLMs, thus fully utilizing their advantages in cross-task reasoning and complex pattern recognition. With only a small amount of training data and a short training period, NT-LLM can efficiently handle complex network traffic patterns at different time scales. Experimental results demonstrate that, compared with baseline models such as LSTM, Informer, and Transformer, the NT-LLM model significantly reduces the mean squared error of network traffic predictions across multiple regions by 44.26%, 56.78%, and 51.36%, respectively. Furthermore, the method does not require extensive fine-tuning of pre-trained language models, showing strong scalability and adaptability, and it maintains high prediction accuracy while reducing computational resource consumption.
-
Study on Open-domain Question Answering Methods Based on Retrieval-augmented Generation
BAI Yuntian, HAO Wenning, JIN Dawei. Study on Open-domain Question Answering Methods Based on Retrieval-augmented Generation[J]. Computer Science, 2025, 52(6A): 240800141-7. doi:10.11896/jsjkx.240800141
Large language models have made significant progress in natural language processing tasks, but their reliance on knowledge encapsulated within parameters can easily lead to the phenomenon of hallucination. To mitigate this issue, retrieval-augmented generation techniques reduce the risk of errors through information retrieval. However, existing methods often retrieve documents that contain inaccurate or misleading information, and they lack discriminative accuracy in evaluating document relevance. In response to these challenges, this study designs a concise and efficient method that combines sparse retrieval with dense retrieval, taking into account both lexical overlap and semantic relevance. Furthermore, a ranker is introduced to reorder the retrieved candidate paragraphs, with the input to the ranker infused with the scores from both sparse and dense retrieval, further optimizing the quality of paragraph ranking. To validate the effectiveness of this method, experiments are conducted on the SQuAD and HotpotQA datasets and compared with existing benchmark methods. The experimental results demonstrate that this method holds a significant advantage in enhancing question-answering performance.
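The score-fusion step described above can be pictured with a short, purely illustrative sketch (not the authors' implementation): sparse BM25 scores and dense embedding similarities are min-max normalized and combined with a weight alpha before the candidates are handed to a ranker. The rank_bm25 library, the placeholder embed() encoder, and the fusion weight are assumptions.

```python
# Minimal sketch of hybrid sparse + dense retrieval with score fusion.
# Assumptions: rank_bm25 for the sparse channel, a placeholder embed()
# for the dense channel, and a simple weighted-sum fusion.
import numpy as np
from rank_bm25 import BM25Okapi

def embed(text: str) -> np.ndarray:
    """Placeholder dense encoder; swap in any sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def minmax(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

def hybrid_retrieve(query: str, passages: list[str], alpha: float = 0.5, top_k: int = 5):
    # Sparse scores: lexical overlap via BM25.
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    sparse = np.array(bm25.get_scores(query.lower().split()))
    # Dense scores: cosine similarity of embeddings.
    q = embed(query)
    dense = np.array([float(embed(p) @ q) for p in passages])
    # Fuse normalized scores; the fused score would also be passed to a
    # downstream ranker together with the passage text.
    fused = alpha * minmax(sparse) + (1 - alpha) * minmax(dense)
    order = np.argsort(-fused)[:top_k]
    return [(passages[i], float(fused[i])) for i in order]
```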
-
Hallucinations Proactive Relief in Diabetes Q&A LLM
ZHANG Le, CHE Chao, LIANG Yan. Hallucinations Proactive Relief in Diabetes Q&A LLM[J]. Computer Science, 2025, 52(6A): 240700182-10. doi:10.11896/jsjkx.240700182
The treatment of diabetes is a long-term and highly personalized endeavor that imposes a significant burden on patients' daily lives. Diabetes consultation through medical large language models (LLMs) can effectively alleviate patients' healthcare burden. However, LLMs are more likely to produce hallucinations, i.e., outputs that are incorrect, meaningless, or mismatched with the input, when processing texts in specialized domains such as medicine, and the accuracy of existing hallucination relief techniques in the medical field is not satisfactory, which greatly affects the accuracy of the LLMs. To address this problem, this paper proposes a hallucination self-inspection and proactive relief method that combines instruction fine-tuning and retrieval-augmented generation to form additional knowledge about user questions before the generation process, and to determine whether a hallucination has been generated by similarity comparison after the generation process. Experiments are conducted on several medical datasets, and an F1 value of 0.79, a BLEU-4 value of 2.38, and a Rouge-L value of 9.26 are achieved on a large-scale diabetic multi-round conversation dataset, outperforming existing hallucination relief techniques for LLMs in terms of accuracy and generation efficiency.
-
Research on Semantic Fusion of Chinese Polysemous Words Based on Large Language Model
YIN Baosheng, ZONG Chen. Research on Semantic Fusion of Chinese Polysemous Words Based on Large Language Model[J]. Computer Science, 2025, 52(6A): 240400139-7. doi:10.11896/jsjkx.240400139
Given the polysemous nature of Chinese words, constructing a comprehensive and standardized Chinese polysemy knowledge base from existing Chinese dictionary resources is of great significance for Chinese semantic analysis, intelligent question answering, machine translation, and the optimization and evaluation of the disambiguation ability of large language models. This paper makes an in-depth analysis of problems such as “the same meaning but different descriptions” encountered when integrating the Modern Chinese Dictionary, the Modern Chinese Standard Dictionary, and other resources. Furthermore, it innovatively proposes a polysemous sense fusion technique based on large language models and prompt learning, which makes full use of the large language model's ability to analyze and understand common-sense knowledge and assist decision-making. The automatic fusion of polysemous senses is accomplished by means of an effective problem decomposition strategy, prompt template design, and cross-validation of semantic relations. Experimental results show that on evaluation data of 50 polysemous words with a total of 754 sense pairs sampled according to a normal distribution, the accuracy of sense fusion based on the above algorithm is 96.26% and the Dice coefficient is 0.9733. This study verifies the feasibility and effectiveness of using large language models for the automatic processing of Chinese knowledge resources. Compared with the traditional processing mode that relies on language experts, it significantly improves the efficiency of knowledge processing while ensuring high quality.
-
Study on Named Entity Recognition Algorithms in Audit Domain Based on Large Language Models
HU Caishun. Study on Named Entity Recognition Algorithms in Audit Domain Based on Large Language Models[J]. Computer Science, 2025, 52(6A): 240700190-4. doi:10.11896/jsjkx.240700190
With the emergence of ChatGPT, large language models have begun to play a significant role across various industries, from general fields to specialized domains. Although there have been methods combining artificial intelligence with auditing, the application of large language models in auditing still needs further research, because the accuracy of traditional artificial intelligence methods is much lower than that of existing large language models. Using AI methods to intelligently identify useful entities within audit texts can greatly enhance work efficiency and reduce errors. Conventional audit-text entity recognition algorithms primarily rely on machine learning combined with feature engineering, which generally results in lower accuracy. In light of this, this study investigates the application of several common open-source models (such as Llama) and closed-source models (such as ChatGPT) to audit-text entity recognition, while integrating in-context learning techniques to improve recognition performance. The results demonstrate that, by employing a sample organization method based on similarity selection, the accuracy of entity recognition can be improved to 98.3%, a notable improvement.
-
Large Model Driven AI Application Service Platform
LIANG Binghao, ZHANG Chuangang, YUAN Mingming. Large Model Driven AI Application Service Platform[J]. Computer Science, 2025, 52(6A): 240900022-4. doi:10.11896/jsjkx.240900022
With the continuous advancement of enterprise data-intelligence transformation, artificial intelligence technology has begun to be applied to internal management, operation analysis, and production efficiency improvement within enterprises. However, the traditional AI application development process involves data acquisition, data cleaning, feature extraction, algorithm modeling, and application development; the overall technical threshold is high, collaboration among team members is difficult, the utilization rate of hardware resources is low, and it is difficult to support the agile implementation of digital-intelligent business requirements. To solve these problems, an AI application service platform based on pre-trained large models is proposed. The platform is mainly designed for AI application development and operation management, which greatly reduces the difficulty of team collaboration and asset management. For the core processes in the preparation, design, and running states, pre-trained large models and low-code technology are introduced. By constructing a labeling large model, a testing large model, and an operation large model, the development efficiency of AI applications is improved. Meanwhile, real-time analysis of operational data is realized, user experience is guaranteed, and the utilization rate of hardware resources is greatly improved.
-
Position-aware Based Multi-modality Lung Cancer Survival Prediction Method
WANG Yicheng, NING Tai, LIU Xinyu, LUO Ye. Position-aware Based Multi-modality Lung Cancer Survival Prediction Method[J]. Computer Science, 2025, 52(6A): 240500089-8. doi:10.11896/jsjkx.240500089
Whole slide images (WSIs) of lung cancer play a pivotal role in prognostic diagnosis. However, survival analysis for lung cancer without pixel-level annotations still encounters numerous challenges. Existing methods often overlook information from clinical feature modalities, the spatial information of patches, and the heterogeneity between WSIs and natural images. To address these hurdles, a position-aware multi-modality lung cancer survival prediction method, PSMMSurv, is proposed. This approach effectively leverages whole slide images and clinical features through multi-modality fusion and multi-task learning. Furthermore, the proposed whole slide image feature learning network achieves position awareness by interacting with information from adjacent locations. Moreover, data heterogeneity issues are overcome through self-supervised learning. Experimental results on a large lung cancer dataset demonstrate that the proposed method surpasses existing approaches in terms of the C-index metric, enabling more accurate prediction of lung cancer patients' survival outcomes and providing reliable support for better lung cancer prognosis.
-
Study on Segmentation Algorithm of Lower Limb Bone Anatomical Structure Based on 3D CT Images
SHI Xincheng, WANG Baohui, YU Litao, DU Hui. Study on Segmentation Algorithm of Lower Limb Bone Anatomical Structure Based on 3D CT Images[J]. Computer Science, 2025, 52(6A): 240500119-9. doi:10.11896/jsjkx.240500119
There are higher demands on the performance and effectiveness of segmentation algorithms in the domain of medical image segmentation, due to disturbances such as noise, artifacts, and low contrast in lower limb bone CT images. In response to this demand, a tailored improvement of the image segmentation model is proposed based on the U-Net convolutional neural network and the characteristics of three-dimensional CT input data, improving segmentation accuracy. The proposed model, built on the U-Net module, employs multiple layers of convolutional pooling aggregation, combined with attention mechanisms and feature fusion between consecutive slices. This approach can fully exploit the features and structural information in the image, achieving an end-to-end image segmentation method. The paper validates the model using a dataset of lower limb bone CT images from Xishan Hospital. Experimental results demonstrate that the average intersection over union (IoU) of the proposed model reaches 84.959%, while the corresponding values for other models are 78.604% (U-Net), 80.481% (Nested U-Net), and 79.877% (Attention U-Net), respectively. The proposed model shows significant improvements compared to the other models.
-
Bi-MI ViT: Bi-directional Multi-level Interaction Vision Transformer for Lung CT Image Classification
LONG Xiao, HUANG Wei, HU Kai. Bi-MI ViT: Bi-directional Multi-level Interaction Vision Transformer for Lung CT Image Classification[J]. Computer Science, 2025, 52(6A): 240700183-6. doi:10.11896/jsjkx.240700183
In recent years, the local-window-based self-attention mechanism has gained prominence in vision tasks. However, due to its limited receptive field and weak modeling ability, it is not effective in dealing with complex data. The features in lung CT images are complex and diverse, including the shape, size, and density of nodules, which makes it challenging to mine the deep features in the data. To address these issues, this paper proposes a bi-directional multi-level interaction vision Transformer (Bi-MI ViT) backbone network that effectively integrates spatial and channel information through an innovative bi-directional multi-level interaction mechanism. This integration significantly improves the accuracy and comprehensiveness of feature extraction. Within the Transformer branch, we introduce an efficient cascaded group attention (CGA) strategy to enrich the diversity of attention-head features and enhance the model's ability to capture key information. Simultaneously, in the convolutional neural network (CNN) branch, we utilize a depth-wise and point-wise (DP) block structure along with point-wise convolution (PW) and depth-wise convolution (DW) to deeply mine local information and optimize the model's representation ability. Additionally, our deep feature extraction (DFE) module enhances feature propagation and reuse while optimizing data utilization efficiency, leading to substantial performance improvement. Experimental results on both the public COVID-CT dataset and a private LUAD-CT dataset demonstrate that the proposed method outperforms the eight comparison methods in classification accuracy.
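For readers unfamiliar with the DW/PW decomposition mentioned in the CNN branch, the following minimal PyTorch sketch shows a generic depth-wise plus point-wise (DP) convolution block; the channel sizes, normalization, and activation are assumptions rather than the paper's exact design.

```python
# Minimal sketch of a depth-wise + point-wise (DP) convolution block,
# illustrating the DW/PW decomposition mentioned in the CNN branch.
import torch
import torch.nn as nn

class DPBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depth-wise conv: one 3x3 filter per input channel (groups=in_ch).
        self.dw = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False)
        # Point-wise conv: 1x1 conv mixes information across channels.
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

# Example: a batch of 2 single-channel 64x64 CT slices -> 32 feature maps.
if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)
    print(DPBlock(1, 32)(x).shape)  # torch.Size([2, 32, 64, 64])
```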
-
Tumor Mutation Prediction Model of Lung Adenocarcinoma Based on Pathological
GUAN Xin, YANG Xueyong, YANG Xiaolin, MENG Xiangfu. Tumor Mutation Prediction Model of Lung Adenocarcinoma Based on Pathological[J]. Computer Science, 2025, 52(6A): 240700010-8. doi:10.11896/jsjkx.240700010
Tumor mutational burden (TMB) is positively correlated with the immunotherapy efficacy of non-small cell lung cancer (NSCLC). In clinical practice, tumor mutational burden is generally measured through whole exome sequencing (WES). However, whole exome sequencing is complex, time-consuming, and expensive, making it inaccessible for most hospitals. In light of this, a low-cost, short-cycle, and high-accuracy deep learning model called DBFormer is proposed to predict the tumor mutational burden of lung adenocarcinoma from pathological tissue slices. Firstly, a color deconvolution structure combines the RGB and HED information of the digital pathological images input into the model, enriching the information in the pathological images and making the model more suitable for medical classification tasks. Secondly, the images are processed through a four-layer pyramid structure, each layer consisting of a max-pooling layer and a DBFormer block. The max-pooling layer reduces the image size and increases the feature matrix dimensions, while the DBFormer block includes normalization layers and dual-route attention mechanisms for feature extraction and processing. Finally, 337 and 200 lung cancer tissue pathological images are randomly selected from the TCGA-LUAD public dataset to construct binary and ternary classification datasets for experimentation. On the binary classification dataset, the DBFormer model achieves an AUC, F1-score, precision, and recall of 99.7%, 97.3%, 97.6%, and 97.2%, respectively. On the ternary classification dataset, DBFormer achieves an accuracy, precision, recall, and F1-score of 97.3%, 97.0%, 97.0%, and 97.1%, respectively. Experimental results demonstrate that the DBFormer model outperforms classical deep learning models in predicting the tumor mutational burden of lung adenocarcinoma from digital pathological images.
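A rough illustration of the RGB+HED input construction is sketched below using scikit-image's color deconvolution; whether DBFormer concatenates the channels in exactly this way is an assumption.

```python
# Minimal sketch of the RGB + HED input construction described above:
# skimage's rgb2hed performs the color deconvolution, and the HED
# channels are stacked with the original RGB channels.
import numpy as np
from skimage.color import rgb2hed

def rgb_plus_hed(rgb_patch: np.ndarray) -> np.ndarray:
    """rgb_patch: HxWx3 float image in [0, 1]; returns an HxWx6 array."""
    hed = rgb2hed(rgb_patch)  # Hematoxylin, Eosin, DAB planes
    # Rescale each HED plane to [0, 1] so it is comparable to RGB.
    hed = (hed - hed.min(axis=(0, 1))) / (np.ptp(hed, axis=(0, 1)) + 1e-8)
    return np.concatenate([rgb_patch, hed], axis=-1)

if __name__ == "__main__":
    patch = np.random.rand(224, 224, 3)
    print(rgb_plus_hed(patch).shape)  # (224, 224, 6)
```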
-
CT Image Segmentation of Intracranial Hemorrhage Based on ESC-TransUNet Network
TAN Jiahui, WEN Chenyan, HUANG Wei, HU Kai. CT Image Segmentation of Intracranial Hemorrhage Based on ESC-TransUNet Network[J]. Computer Science, 2025, 52(6A): 240700030-9. doi:10.11896/jsjkx.240700030
In view of the challenges encountered in CT image processing of intracranial hemorrhage, such as the variability of the spatial position, shape, and size of the hemorrhage region and the difficulty in determining its boundary due to the similar intensity values of surrounding tissue, an improved TransUNet image segmentation model (ESC-TransUNet) is proposed. Firstly, an explicit visual center (EVC) is added to the model before upsampling, which can capture the correlations between far-apart pixels in the image and retain the detailed information of local corner regions in the input image, helping to effectively extract the features of the bleeding region. Secondly, a shuffle attention (SA) mechanism is introduced in the encoder stage, which effectively learns the small differences between the bleeding area and the background, thus improving the accuracy of the segmentation task. Finally, a CBM2 structure is used in the decoder stage to promote more effective information transmission and enhance the generalization ability and accuracy of the model. Extensive experiments have been conducted on PhysioNet (PHY), a publicly available intracranial hemorrhage dataset. The results show that the proposed method outperforms nine other mainstream segmentation methods and achieves better performance in the task of intracranial hemorrhage CT image segmentation.
-
LST-ARBunet: An Improved Deep Learning Algorithm for Nodule Segmentation in Lung CT Images
CHEN Xianglong, LI Haijun. LST-ARBunet: An Improved Deep Learning Algorithm for Nodule Segmentation in Lung CT Images[J]. Computer Science, 2025, 52(6A): 240600020-10. doi:10.11896/jsjkx.240600020
In this paper, a novel deep learning model, LST-ARBunet, is proposed to solve the problem of accurate segmentation of lung nodules in lung computed tomography (CT) images. In the field of lung nodule detection, segmentation is difficult due to factors such as the tiny size of nodules, their diverse morphology, and their high similarity to surrounding tissues. The main innovations of the LST-ARBunet model are: incorporating the Swin-Transformer structure in the downsampling process to capture features of the lung images at different scales; applying a local convolutional front-end and parameter sharing to the Swin-Transformer structure to reduce the number of model parameters; incorporating a customized attention mechanism in the upsampling process to capture important detailed features; and using inverted residual blocks instead of ordinary convolutions to lighten the model. In experimental validation on the publicly available lung nodule CT dataset LIDC-IDRI, LST-ARBunet demonstrates clear performance improvements, with an intersection over union (IoU) of 0.889, an average symmetric surface distance (ASSD) of 1.453, and a Dice similarity score of 0.884, all of which outperform the ablation variants as well as the ResUnet, PSPNet, and DeepLabv3+ models. In addition, LST-ARBunet maintains high segmentation accuracy with a relatively reasonable inference time of 1.3 s, providing a feasible balance of efficiency for clinical applications. This study provides a new technical approach to lung nodule segmentation; future work will explore the model's performance on more diverse clinical datasets, further optimize model efficiency, and advance its deployment in real-world healthcare environments to provide strong support for the early detection and treatment of lung cancer.
-
Heart Sound Classification Algorithm Using Enhanced Image Coding and Asymmetric Convolutional Networks
WANG Shengyi, YANG Hongbo, PAN Jiahua, WANG Weilian. Heart Sound Classification Algorithm Using Enhanced Image Coding and Asymmetric Convolutional Networks[J]. Computer Science, 2025, 52(6A): 240700195-8. doi:10.11896/jsjkx.240700195
This paper proposes a heart sound classification algorithm using enhanced image coding and asymmetric convolutional networks. Unlike traditional methods that extract statistical and time-frequency features from heart sounds, this algorithm enhances three image coding methods, the Gramian angular field (GAF), the Markov transition field (MTF), and the recurrence plot (RP), by introducing the fractional Fourier transform (FrFT), yielding the FrFT-GAF, FrFT-MTF, and FrFT-RP image coding modules, respectively. The one-dimensional heart sound signal is transformed into a two-dimensional encoded feature map using these modules. An asymmetric convolutional network (ACNet) then leverages advances in computer vision to analyze and process the two-dimensional feature map for effective heart sound classification. In addition, the performance of the above image coding modules is evaluated and compared. Experimental results demonstrate that the FrFT-RP module achieves the best performance in binary heart sound classification tasks, with accuracies of 0.981 and 0.977 and F1 scores of 0.989 and 0.974 on dataset 1 and dataset 2 (the PhysioNet/CinC 2016 dataset), respectively, followed by the FrFT-MTF and FrFT-GAF modules in that order. The FrFT-enhanced image encoding features significantly improve performance compared with previous methods, providing a novel approach to heart sound signal classification that is expected to be applied in machine-assisted diagnosis of congenital heart disease.
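Two of the image codings named above have simple closed forms, the Gramian angular field and the recurrence plot, and can be written down directly from their standard definitions, as in the hypothetical sketch below; the FrFT pre-processing used in the paper is deliberately left out.

```python
# Minimal sketch of two standard 1-D -> 2-D encodings: the Gramian
# angular (summation) field and the recurrence plot. The FrFT step that
# the paper applies beforehand is omitted here.
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    # Rescale the signal to [-1, 1], map to angles, then build cos(phi_i + phi_j).
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

def recurrence_plot(x: np.ndarray, eps: float = 0.1) -> np.ndarray:
    # Binary matrix marking pairs of samples closer than a threshold eps.
    dist = np.abs(x[:, None] - x[None, :])
    return (dist <= eps).astype(np.float32)

if __name__ == "__main__":
    t = np.linspace(0, 1, 256)
    heart_sound = np.sin(2 * np.pi * 25 * t) * np.exp(-3 * t)  # toy signal
    print(gramian_angular_field(heart_sound).shape, recurrence_plot(heart_sound).shape)
```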
-
Research on Electrocardiogram Classification and Recognition Algorithm Based on Transfer Learning
CHEN Qirui, WANG Baohui, DAI Chencheng. Research on Electrocardiogram Classification and Recognition Algorithm Based on Transfer Learning[J]. Computer Science, 2025, 52(6A): 240900073-8. doi:10.11896/jsjkx.240900073
As the pace of urban life continues to accelerate, more and more people are troubled by cardiovascular diseases. The electrocardiogram is a key means of diagnosing heart disease, but in the face of the growing number of patients, limited medical resources cannot meet the huge demand for electrocardiogram interpretation. Therefore, using computers to automatically classify and identify electrocardiograms has become an urgent need. This study is based on a clinical dataset provided by Anzhen Hospital. According to statistics, the dataset suffers from problems such as a small total amount of data, uneven data distribution, and unlabeled samples. Accordingly, this paper uses a semi-supervised learning method to label the unlabeled data, achieving a labeling accuracy of 91.4%. Secondly, this paper uses transfer learning to train the model. The MMD value between the source dataset and the target dataset used in this paper is 1.99, indicating a high similarity between the two distributions. Compared with other training methods, this algorithm can achieve better learning results on datasets with a small total amount of data and uneven distribution. On the actual outpatient dataset, the model's accuracy reaches 0.973, its recall reaches 0.866, and its F1 value reaches 0.932; compared with not using transfer learning, accuracy is improved by 0.423, recall by 0.274, and F1 by 0.384. These results show that the algorithm has good generalization ability and adaptability and can provide strong support for clinical practice.
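The MMD statistic quoted above (1.99 between the source and target sets) can be illustrated with a small numpy sketch using an RBF kernel; the kernel bandwidth and feature dimensionality are assumptions, not values from the paper.

```python
# Minimal sketch of the maximum mean discrepancy (MMD) between source
# and target feature distributions, with an RBF kernel.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    sq_dist = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dist)

def mmd_squared(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased estimate of MMD^2 between samples x (n, d) and y (m, d)."""
    return float(rbf_kernel(x, x, gamma).mean()
                 + rbf_kernel(y, y, gamma).mean()
                 - 2 * rbf_kernel(x, y, gamma).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(200, 16))   # e.g. source ECG features
    target = rng.normal(0.5, 1.0, size=(200, 16))   # e.g. target ECG features
    print(mmd_squared(source, target))
```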
-
Function Prediction of Therapeutic Peptides with Multi-coded Neural Networks Based on Projected Gradient Descent
RAN Qin, RUAN Xiaoli, XU Jing, LI Shaobo, HU Bingqi. Function Prediction of Therapeutic Peptides with Multi-coded Neural Networks Based on Projected Gradient Descent[J]. Computer Science, 2025, 52(6A): 240800024-6. doi:10.11896/jsjkx.240800024
Therapeutic peptides are widely used in disease treatment as an effective alternative to traditional antibiotic drugs in the field of biomedicine, owing to their minimal toxicity, high absorption rate, and high biological activity. However, limited consideration has so far been given to predicting the multiple functions of therapeutic peptides from the perspective of deep learning. Therefore, a neural network prediction model with projected gradient descent (PGD), called PrMFTP-PGD, is proposed based on publicly available multi-functional therapeutic peptide (MFTP) datasets. The approach involves three steps. First, a multi-encoder is combined with a multi-head attention mechanism to extract the features of the input vectors and obtain a better representation capability. Then, a linear attention mechanism is introduced to further enhance feature representation and extraction. Finally, adversarial training with PGD is used to mitigate the challenges posed by the inherent class imbalance in the MFTP datasets. The proposed method is compared with the existing methods MPMAB, MLBP, PrMFTP, and SP-RNN on an independent test set, and shows the largest improvements across four key metrics: precision (2.55%), coverage (2.81%), accuracy (2.59%), and absolute correctness (2.39%), indicating that the method can enhance the model's ability to capture sequence features and thus better predict multi-functional therapeutic peptides.
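PGD-based adversarial training on embeddings, the general technique named in the abstract, can be sketched as follows; this is a minimal illustration under assumed hyperparameters and model structure, not the PrMFTP-PGD code.

```python
# Minimal sketch of PGD adversarial training on an embedding layer.
# Model size, epsilon, step size, and number of steps are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPeptideClassifier(nn.Module):
    def __init__(self, vocab=25, dim=32, n_labels=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, n_labels)

    def forward(self, tokens, delta=None):
        e = self.emb(tokens)
        if delta is not None:            # inject adversarial perturbation
            e = e + delta
        return self.head(e.mean(dim=1))  # mean-pool over the sequence

def pgd_loss(model, tokens, labels, eps=0.05, alpha=0.01, steps=3):
    delta = torch.zeros_like(model.emb(tokens), requires_grad=True)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(tokens, delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend along the gradient sign, then project back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return F.binary_cross_entropy_with_logits(model(tokens, delta), labels)

if __name__ == "__main__":
    model = TinyPeptideClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    tokens = torch.randint(0, 25, (8, 50))          # batch of peptide sequences
    labels = torch.randint(0, 2, (8, 5)).float()    # multi-label targets
    loss = F.binary_cross_entropy_with_logits(model(tokens), labels) + pgd_loss(model, tokens, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```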
-
Study on Diagnosis Model of Livestock and Poultry Disease Based on Improved TF-IIGM Algorithm
GUO Xiaoli, LI Qifeng, LIU Yu, ZHANG Jun, ZHAO Hongtao, YANG Gan, JIANG Ruixiang, YU Ligen. Study on Diagnosis Model of Livestock and Poultry Disease Based on Improved TF-IIGM Algorithm[J]. Computer Science, 2025, 52(6A): 240700029-7. doi:10.11896/jsjkx.240700029
To address the problem of low diagnostic accuracy caused by inaccurate weight allocation of feature terms in livestock and poultry disease texts, an improved TF-IIGM-GW algorithm combined with Word2vec word vectors is used to realize text vectorization. On the basis of the TF-IIGM weighting method, the weights are normalized and combined with a rule-based keyword extraction algorithm to further increase the weight of core keywords in the texts. Finally, the text vectors obtained by combining these weights with Word2vec word vectors are input into a support vector machine (SVM) for the diagnosis of livestock and poultry diseases. To verify the effectiveness of the improved algorithm, it is compared with commonly used word vector methods on a self-built text dataset of livestock and poultry diseases. Results show that the macro-F1 and micro-F1 values of the TF-IIGM-GW algorithm are 96.73% and 96.76%, respectively, which are 2.25% and 2.26% higher than those of the commonly used TF-IDF algorithm, and 0.90% and 0.97% higher than those of the TF-IIGM weighting method. The improved algorithm can effectively improve the performance of disease diagnosis. Analysis of the SVM results on each type of disease shows that sheep oral aphthae is the most easily misjudged.
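The overall vectorization-plus-SVM pipeline can be pictured with the hypothetical sketch below, in which a term_weight() stub stands in for the paper's TF-IIGM-GW weighting (whose exact formula is not reproduced here) and gensim's Word2Vec supplies the word vectors.

```python
# Minimal sketch of term-weighted Word2vec document vectors fed to an SVM.
# term_weight() is a placeholder for TF-IIGM-GW, not the paper's formula.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

def term_weight(term: str, doc: list[str]) -> float:
    """Placeholder for TF-IIGM-GW; here just raw term frequency."""
    return doc.count(term) / len(doc)

def doc_vector(doc: list[str], w2v: Word2Vec) -> np.ndarray:
    vecs = [term_weight(t, doc) * w2v.wv[t] for t in doc if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

if __name__ == "__main__":
    docs = [["fever", "cough", "swine"], ["lesion", "mouth", "sheep"],
            ["fever", "swine", "appetite"], ["mouth", "sheep", "scab"]]
    labels = [0, 1, 0, 1]
    w2v = Word2Vec(docs, vector_size=50, min_count=1, epochs=50)
    X = np.stack([doc_vector(d, w2v) for d in docs])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X))
```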
-
Review of Path Planning Algorithms for Mobile Robots
LIU Qingyun, YOU Xiong, ZHANG Xin, ZUO Jiwei, LI Jia. Review of Path Planning Algorithms for Mobile Robots[J]. Computer Science, 2025, 52(6A): 240900074-10. doi:10.11896/jsjkx.240900074
Path planning is one of the key technologies for mobile robots to achieve autonomous motion: it helps a robot find an optimal or suboptimal path in a complex environment so that it can reach the target position from the starting point. A good path planning algorithm is of great significance for improving the performance, adaptability, and reliability of robots. In order to comprehensively and clearly understand the current state of research on path planning algorithms for mobile robots at home and abroad, this paper summarizes and reviews commonly used path planning algorithms. Based on the principles and characteristics of each algorithm, path planning algorithms are first divided into four categories: traditional algorithms, sampling-based algorithms, intelligent bionic algorithms, and artificial intelligence algorithms. Secondly, each type of algorithm is subdivided; the principles, advantages, and disadvantages of each algorithm are introduced in detail, and improvements proposed by researchers to address the limitations of each algorithm are presented. Finally, the advantages and disadvantages of the algorithms are summarized, compared, and analyzed, and the development trends of mobile robot path planning algorithms are outlined, in the hope of providing a reference for the development of mobile robot path planning.
-
Review on Methods and Applications of Short Text Similarity Measurement in Social Media Platforms
FAN Xing, ZHOU Xiaohang, ZHANG Ning. Review on Methods and Applications of Short Text Similarity Measurement in Social Media Platforms[J]. Computer Science, 2025, 52(6A): 240400206-8. doi:10.11896/jsjkx.240400206
Short text similarity measurement is a fundamental task in the field of natural language processing. With the increase in user activity on social media platforms, short text data has become the primary carrier of internet information dissemination. This data type holds considerable value for businesses in gaining insights into consumer sentiment and in accurately building user profiles from big data. Short text similarity measurement methods can be systematically categorized into three classes: string-based methods, vector-based methods, and deep learning methods. This paper explores the advantages and limitations of these methods. Moreover, it emphasizes the practical applications of short text similarity in business analytics, demonstrating how similarity measurement can enable businesses to derive insights into consumer opinions and attitudes and to refine marketing strategies. Finally, the study provides a comprehensive summary of the challenges encountered in short text similarity measurement on social media platforms and anticipates future developments, with the aim of offering valuable references and insights for related researchers.
-
Survey of Artificial Intelligence Ensuring eVTOL Flight Safety in the Context of Low-altitude Economy
SU Zhiyuan, ZHAO Lixu, HAO Zhiheng, BAI Rufeng. Survey of Artificial Intelligence Ensuring eVTOL Flight Safety in the Context of Low-altitude Economy[J]. Computer Science, 2025, 52(6A): 250200050-13. doi:10.11896/jsjkx.250200050
With the rise of the low-altitude economy, electric vertical take-off and landing (eVTOL) aircraft are seeing increasingly widespread applications. Ensuring their flight safety has become critically important, necessitating strengthened research into relevant safety measures. Through a systematic review of existing literature, this study conducts an in-depth analysis along three dimensions: the operational reliability of eVTOL systems, the safety of operational protocols, and the security of flight data transmission. It identifies key challenges such as data security risks, decision-making under complex environmental conditions, computational resource constraints, and multi-agent coordination complexities. Finally, the paper forecasts future trends in AI-driven eVTOL safety applications and proposes priorities for technological R&D, regulatory framework optimization, and industry collaboration. These recommendations aim to present a comprehensive roadmap for AI-enabled safety solutions in eVTOL operations, serving as a foundational reference for future advancements in this field.
-
Research Progress and Challenges in Forest Fire Risk Prediction
YANG Jixiang, JIANG Huiping, WANG Sen, MA Xuan. Research Progress and Challenges in Forest Fire Risk Prediction[J]. Computer Science, 2025, 52(6A): 240400177-8. doi:10.11896/jsjkx.240400177
With the intensification of global climate change and human activities, forest fire incidents have become increasingly frequent, leading to severe ecological damage and socioeconomic losses. Forest fire risk prediction, as a primary measure for forest fire management and monitoring, is of significant importance. Therefore, this study conducts an in-depth analysis of existing forest fire risk prediction methods. These methods are categorized into three types based on their data sources: models based on geographical environmental factors, models based on remote sensing and geographic information systems (GIS), and models based on remote sensing imagery. The characteristics of each method are thoroughly summarized, and their research approaches, application scopes, and specific requirements for data and algorithms are analyzed. Subsequently, this study introduces several datasets proposed by researchers in the field of forest fire risk prediction and compares the experimental results of the aforementioned prediction methods. Finally, the major issues associated with the three types of models are analyzed, and future research directions are proposed.
-
Study on Multi-agent Supply Chain Inventory Management Method Based on Improved Transformer
PIAO Mingjie, ZHANG Dongdong, LU Hu, LI Rupeng, GE Xiaoli. Study on Multi-agent Supply Chain Inventory Management Method Based on Improved Transformer[J]. Computer Science, 2025, 52(6A): 240500054-10. doi:10.11896/jsjkx.240500054
Effective supply chain inventory management is crucial for large-scale manufacturing industries such as civil aircraft and automotive manufacturing, as it ensures efficient production operations. Typically, the main manufacturer formulates an annual inventory management plan and contacts suppliers when certain materials approach critical inventory levels based on the actual production schedule. However, changes in actual production conditions may necessitate alterations to the annual inventory management plan, so making procurement decisions based on actual production conditions and inventory is relatively more flexible and efficient. In recent years, many researchers have focused on using reinforcement learning methods to study inventory management problems. Current methods can achieve a degree of efficient management when solving the inventory management problem of a civil aircraft manufacturing supply chain with a multi-node, multi-material model, but with high complexity. To address this issue, the problem is formalized as a partially observable Markov decision process model, and a multi-agent supply chain inventory management method based on an improved Transformer is proposed. Drawing on the sequence-decision nature of multi-agent reinforcement learning, this method transforms the multi-agent reinforcement learning problem into a sequence modeling problem with an encoder-decoder architecture, logically reducing the complexity of the algorithm. Experimental results show that, compared with existing reinforcement-learning-based methods, the proposed method reduces complexity by about 90% while maintaining similar performance.
-
Question Answering System for Soybean Planting Management Based on Knowledge Graph
ZHENG Xinxin, CHEN Fan, SUN Baodan, GONG Jianguang, JIANG Junhui. Question Answering System for Soybean Planting Management Based on Knowledge Graph[J]. Computer Science, 2025, 52(6A): 240500025-8. doi:10.11896/jsjkx.240500025
Traditional soybean databases cover a narrow range of knowledge and contain much complicated, invalid information, which makes it impossible for soybean growers to effectively solve production problems on the Internet. Knowledge graphs provide a way to extract knowledge from massive text and image data, enabling users to retrieve information quickly and effectively. Therefore, this paper first constructs a soybean planting management knowledge graph based on existing open information, and builds a related question answering system to help soybean growers solve problems encountered during planting. Specifically, the paper uses a top-down knowledge graph construction method to collect existing knowledge and prior knowledge in professional fields, and uses the BIO scheme to label the data. It then constructs the knowledge graph after extracting entities with a Bert-BiLSTM-CRF model. Finally, using the Bert-BiLSTM-CRF model and a Bert+TextCNN model, it completes the named entity recognition and user intent classification tasks to build the question answering system. Experimental results show that the soybean planting management question answering system constructed in this paper can effectively answer questions encountered during planting, which demonstrates that the system can be applied in practice.
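The final question-answering step, routing a recognized entity and intent to a graph query, might look like the hypothetical sketch below; the Neo4j storage, graph schema, relation names, and intent labels are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch of answering a question once NER has produced an entity
# and the intent classifier a question type. Schema and relations are
# hypothetical.
from neo4j import GraphDatabase

INTENT_TO_RELATION = {
    "disease_symptom": "HAS_SYMPTOM",    # e.g. "What are the symptoms of soybean rust?"
    "disease_treatment": "TREATED_BY",   # e.g. "How do I treat soybean rust?"
}

def answer(driver, entity: str, intent: str) -> list[str]:
    rel = INTENT_TO_RELATION[intent]
    query = (
        f"MATCH (e {{name: $name}})-[:{rel}]->(t) "
        "RETURN t.name AS answer"
    )
    with driver.session() as session:
        return [record["answer"] for record in session.run(query, name=entity)]

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    print(answer(driver, "soybean rust", "disease_symptom"))
    driver.close()
```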
-
Commodity Attribute Classification Method Based on Dual Pre-training
ZHAO Zheyu, WANG Zhongqing, WANG Hongling. Commodity Attribute Classification Method Based on Dual Pre-training[J]. Computer Science, 2025, 52(6A): 240500127-8. doi:10.11896/jsjkx.240500127
The commodity attribute classification task refers to analyzing the attributes of a piece of merchandise based on its descriptive text and then categorizing multiple attributes. This aids in providing insights into merchandise from various perspectives, thereby assisting marketing and product management. While the use of large language models is increasingly prevalent, their performance on commodity attribute classification tasks remains suboptimal due to a lack of domain knowledge and attribute correlations. To address this issue, this paper proposes a dual pre-training-based method for commodity attribute classification, aiming to enhance the performance of large language models on such tasks through specific pre-training techniques. Building upon the T5 model, this paper introduces two methods: domain-specific text pre-training and attribute-correlation-based pre-training. These methods enhance the model's understanding of the specific task from both the input and output text perspectives, facilitating the classification of multiple attributes of merchandise. Experimental results on the Clothing Fit Data dataset demonstrate that the dual pre-trained T5 model outperforms both non-pre-trained models and other baseline models in attribute classification, validating the effectiveness of the proposed approach.
-
Adaptive Hybrid Genetic Algorithm Based on PPO for Solving Traveling Salesman Problem
HUANG Ao, LI Min, ZENG Xiangguang, PAN Yunwei, ZHANG Jiaheng, PENG Bei. Adaptive Hybrid Genetic Algorithm Based on PPO for Solving Traveling Salesman Problem[J]. Computer Science, 2025, 52(6A): 240600096-6. doi:10.11896/jsjkx.240600096
The traveling salesman problem (TSP) is a classic combinatorial optimization problem known for its significant computational complexity. Traditional genetic algorithms rely heavily on empirical parameter tuning when solving the TSP, and a premature reduction of population diversity leads to local convergence, severely impacting algorithm performance. Therefore, this paper proposes an adaptive hybrid genetic algorithm (AHGA) that adjusts the key parameters of the genetic algorithm adaptively with reinforcement learning. Firstly, an adaptive parameter adjustment model based on the genetic algorithm is constructed, and the proximal policy optimization (PPO) algorithm is employed to generate the action policies that control population evolution. Secondly, a hybrid operator is introduced into the traditional crossover and mutation operators to enhance population diversity in later iterations. Finally, the effectiveness and performance of the algorithm are validated on various TSPLIB public instances. The results demonstrate that the proposed algorithm significantly improves the solution quality and convergence speed of the genetic algorithm, effectively avoids local convergence, and outperforms similar algorithms in solving the TSP.
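A stripped-down illustration of the adaptive idea, a GA whose crossover and mutation rates are supplied each generation by an external policy (the role PPO plays in AHGA), is sketched below; the operators, the stub policy, and all hyperparameters are assumptions.

```python
# Minimal sketch of a GA for the TSP with externally supplied rates.
# policy() is a stub standing in for a PPO-trained policy.
import random
import math

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def swap_mutation(tour):
    i, j = random.sample(range(len(tour)), 2)
    tour[i], tour[j] = tour[j], tour[i]
    return tour

def policy(generation, diversity):
    """Stub for the PPO policy: returns (crossover_rate, mutation_rate)."""
    return 0.9, min(0.5, 0.05 + 0.5 * (1.0 - diversity))

def adaptive_ga(cities, pop_size=50, generations=200):
    pop = [random.sample(range(len(cities)), len(cities)) for _ in range(pop_size)]
    for g in range(generations):
        ranked = sorted(pop, key=lambda t: tour_length(t, cities))
        diversity = len({tuple(t) for t in pop}) / pop_size
        cx_rate, mut_rate = policy(g, diversity)
        elite = ranked[: pop_size // 5]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = order_crossover(p1, p2) if random.random() < cx_rate else p1[:]
            if random.random() < mut_rate:
                child = swap_mutation(child)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, cities))

if __name__ == "__main__":
    cities = [(random.random(), random.random()) for _ in range(30)]
    best = adaptive_ga(cities)
    print(round(tour_length(best, cities), 3))
```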
-
Named Entity Recognition Algorithm Based on Pre-training Model and Bidirectional Two-dimensional Convolution
LIN Nan, LIU Zhihui, YANG Cong. Named Entity Recognition Algorithm Based on Pre-training Model and Bidirectional Two-dimensional Convolution[J]. Computer Science, 2025, 52(6A): 240700143-6. doi:10.11896/jsjkx.240700143
A named entity recognition algorithm, BAM-TDNN, based on bidirectional two-dimensional convolution and a pre-training model is proposed to address the problem of semantic information weakening layer by layer when nested structures are processed in named entity recognition. The algorithm first uses four word embedding strategies, namely BERT, distance, locality, and attention embedding, to extract semantic features at different levels within a sentence, and converts the multi-level semantic features into two-dimensional semantic representations, better capturing the semantic information between nested structures. Secondly, the Bi-TDNN model is used to learn the long-range semantic dependencies of entities in sentences, expand the receptive field of span representations, provide more accurate semantic information between nested entities, and better understand the semantic associations between nested entities. Evaluated on four public datasets, the proposed named entity recognition algorithm achieves good performance on multiple entity recognition datasets: the accuracy, recall, and F1 values of BAM-TDNN are 86.83%, 87.93%, and 86.83% on the ACE2005 dataset, 86.52%, 82.37%, and 84.36% on the GENIA dataset, and 92.24%, 93.72%, and 91.97% on the CoNLL2003 dataset, respectively.
-
Multi-view CLIP and Hybrid Contrastive Learning for Multimodal Image-Text Sentiment Analysis
YE Jiale, PU Yuanyuan, ZHAO Zhengpeng, FENG Jue, ZHOU Lianmin, GU Jinjing. Multi-view CLIP and Hybrid Contrastive Learning for Multimodal Image-Text Sentiment Analysis[J]. Computer Science, 2025, 52(6A): 240700060-7. doi:10.11896/jsjkx.240700060
-
Abstract
PDF(2807KB) ( 58 )
- References | Related Articles | Metrics
-
Most previous multimodal image-text sentiment analysis models use different encoder structures to encode the features of images and text respectively,focusing on exploring different modal feature fusion methods to realize sentiment analysis.However,due to the differences in semantic space between independently extracted features,the semantic associations and complementarities between different features cannot be effectively captured during interaction,which in turn reduces the accuracy of sentiment analysis.To address the above problems,this paper proposes a multimodal image-text sentiment analysis method with multi-view CLIP and hybrid contrastive learning.Specifically,the multi-view CLIP feature encoding module employs CLIP to jointly encode image and text representations to improve the semantic consistency of features,and performs multimodal sentiment analysis from multiple perspectives,including image,text,and image-text interaction.In addition,the hybrid contrastive learning module enables the model to extract features with more emotional characteristics and effective information to improve the robustness of the model.In order to remove redundant information in image-text interaction,this paper adopts a cascaded CNN and Transformer fusion strategy,which makes full use of the local and global information of image and text to improve the feature representation capability.Finally,comprehensive experiments on three public datasets verify the superiority of the proposed method,and ablation experiments prove the effectiveness of its components.
-
FB-TimesNet:An Improved Multimodal Emotion Recognition Method Based on TimesNet
李为荣, 殷继彬. FB-TimesNet:基于TimesNet改进的多模态情绪识别方法[J]. 计算机科学, 2025, 52(6A): 240900046-8.
LI Weirong, YIN Jibin. FB-TimesNet:An Improved Multimodal Emotion Recognition Method Based on TimesNet[J]. Computer Science, 2025, 52(6A): 240900046-8. - LI Weirong, YIN Jibin
- Computer Science. 2025, 52 (6A): 240900046-8. doi:10.11896/jsjkx.240900046
-
Abstract
PDF(3218KB) ( 52 )
- References | Related Articles | Metrics
-
Aiming at limitations in the field of emotion recognition such as single-modality information sources,poor anti-interference,high computational cost,and insufficient attention to temporal features,this paper proposes FB-TimesNet,a hybrid emotion recognition method for facial expression and body gesture improved from TimesNet.Firstly,the human body key-point coordinates are extracted from video frames,and the change values of the facial key-point coordinates relative to the natural state and the body posture key-point coordinates are used as the raw information features of facial expression and body posture respectively,thus reducing the dimensionality of the data and the computational cost.Secondly,the periodic changes of the input data are captured using the fast Fourier transform,which converts the one-dimensional data into two-dimensional data,and two-dimensional convolution kernels are then used to encode and extract spatio-temporal features from the two sets of features separately to enhance the characterization ability of the data.Finally,a fusion algorithm is used to dynamically allocate the weights of each modality to obtain the best fusion effect.Extensive comparative experiments are conducted on two common emotion datasets,and the results show that FB-TimesNet improves the classification accuracy by 4.89% over the baseline model on the BRED dataset.
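As an illustrative aside,a minimal sketch of the FFT step the abstract relies on is given below:the dominant periods of a one-dimensional sequence are selected by spectral amplitude and the sequence is folded into two-dimensional period-phase views,in the spirit of TimesNet.The function name and the choice of k are illustrative assumptions,not the paper's code.

```python
# Illustrative sketch only: pick the k strongest frequencies of a 1D sequence by
# FFT amplitude and fold the sequence into 2D "number-of-periods x period" views.
import torch

def fold_by_dominant_period(x, k=2):
    """x: 1D tensor of length T (one channel); returns a list of 2D views."""
    T = x.shape[0]
    amp = torch.fft.rfft(x).abs()
    amp[0] = 0                                  # ignore the DC component
    top = torch.topk(amp, k).indices            # indices of the strongest frequencies
    views = []
    for f in top:
        freq = max(int(f), 1)
        period = max(T // freq, 1)
        rows = T // period
        views.append(x[: rows * period].reshape(rows, period))
    return views

# Example: a signal with period 25 folds into rows of length about 25.
# t = torch.arange(200, dtype=torch.float32); x = torch.sin(2 * torch.pi * t / 25)
# print([v.shape for v in fold_by_dominant_period(x)])
```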
-
Study on Algorithm for Keyword Extraction from WeChat Conversation Text
王宝会, 许卜仁, 李长傲, 叶子豪. 微信会话文本关键词提取的算法研究[J]. 计算机科学, 2025, 52(6A): 240700105-8.
WANG Baohui, XU Boren, LI Chang’ao, YE Zihao. Study on Algorithm for Keyword Extraction from WeChat Conversation Text[J]. Computer Science, 2025, 52(6A): 240700105-8. - WANG Baohui, XU Boren, LI Chang’ao, YE Zihao
- Computer Science. 2025, 52 (6A): 240700105-8. doi:10.11896/jsjkx.240700105
-
Abstract
PDF(2353KB) ( 39 )
- References | Related Articles | Metrics
-
WeChat group chats contain a large volume of conversational text data,and extracting keywords from these conversations helps to understand group dynamics and topic evolution.Traditional keyword extraction methods perform poorly due to the characteristics of WeChat conversations,such as short length,topic interleaving,and informal language use.To address these challenges,this paper proposes a multi-stage keyword extraction algorithm based on conversation topic clustering.First,we introduce a conversation topic clustering algorithm(single pass using thread segmentation and pre-training knowledge,SPTSPK),addressing the issues of topic interleaving and insufficient information by comprehensively considering semantic relevance,message activity and user intimacy.Second,we propose a multi-stage keyword extraction algorithm(MSKE) that decomposes the task into unsupervised keyword extraction and supervised keyword generation to extract both present and absent keywords from the original text,reducing the scale of candidate words and semantic redundancy.Finally,we combine SPTSPK with MSKE to achieve keyword extraction from WeChat conversation texts.Compared to AutoKeyGen on the WeChat dataset,the average F1@5 and F1@O increase by 12.8% and 10.8% respectively,and the average R@10 reaches 2.59 times that of AutoKeyGen.Experimental results show that the proposed algorithm can effectively extract keywords from WeChat conversation texts.
-
External Knowledge Query-based for Visual Question Answering
徐钰涛, 汤守国. 基于外部知识查询的视觉问答[J]. 计算机科学, 2025, 52(6A): 240400101-8.
XU Yutao, TANG Shouguo. External Knowledge Query-based for Visual Question Answering[J]. Computer Science, 2025, 52(6A): 240400101-8. - XU Yutao, TANG Shouguo
- Computer Science. 2025, 52 (6A): 240400101-8. doi:10.11896/jsjkx.240400101
-
Abstract
PDF(2547KB) ( 40 )
- References | Related Articles | Metrics
-
To address the limitation of current visual question answering(VQA) models in handling questions that require external knowledge,this paper proposes a question-guided mechanism for querying external knowledge(QGK).The aim is to integrate key knowledge to enrich question text,thereby improving the accuracy of VQA models.We develop a question-guided external knowledge query mechanism to expand the text feature representation within the model and enhance its ability to handle complex problems.This mechanism includes a multi-stage processing method with steps for keyword extraction,query construction,and knowledge screening and refining.Besides,we introduce visual common sense features to validate the effectiveness of the proposed method.Experimental results demonstrate that the proposed query mechanism effectively provides crucial external knowledge and significantly improves model accuracy on the VQA v2.0 dataset.When the query mechanism is integrated into the baseline model,the accuracy increases to 71.05%.Furthermore,combining visual common sense features with the external knowledge query mechanism boosts the model’s accuracy to 71.38%.These results confirm the significant impact of the proposed method on enhancing VQA model performance.
-
Zero-shot Stance Detection in Chinese by Fusion of Emotion Lexicon and Graph Contrastive Learning
付书凡, 王中卿, 姜晓彤. 融合情感词典和图对比学习的中文零样本立场检测[J]. 计算机科学, 2025, 52(6A): 240500051-7.
FU Shufan, WANG Zhongqing, JIANG Xiaotong. Zero-shot Stance Detection in Chinese by Fusion of Emotion Lexicon and Graph Contrastive Learning[J]. Computer Science, 2025, 52(6A): 240500051-7. - FU Shufan, WANG Zhongqing, JIANG Xiaotong
- Computer Science. 2025, 52 (6A): 240500051-7. doi:10.11896/jsjkx.240500051
-
Abstract
PDF(2210KB) ( 47 )
- References | Related Articles | Metrics
-
Zero-shot stance detection aims to identify the author’s attitude towards a specific target or topic when labeled data is limited or nonexistent.Current zero-shot stance detection methods are mainly based on attention mechanisms or the incorporation of external sentiment information.However,these methods often neglect the latent sentiment information within the original text and the semantic relationships between entities.To address this issue,a zero-shot stance detection model integrating a sentiment lexicon and graph contrastive learning(EL-CL) is proposed.It employs a chain-of-thought prompting method to uncover sentiment information within the original text,aiding the construction of new input texts.During the clustering of input texts to generate prototype graphs,a sentiment lexicon is introduced to enhance the sentiment information within the text vectors of the prototype graph.Additionally,a self-supervised graph contrastive learning method is employed to augment the vectors containing sentiment features,thereby improving the model’s ability to infer on unseen samples.Experimental results on the public NLPCC2016 Chinese Weibo stance detection dataset demonstrate that,across five targets,the proposed model improves the macro-F1 score by 10% over baseline models,which proves its effectiveness in zero-shot stance detection scenarios.
-
Machine Translation of English-Chinese Long Complex Sentences in Patent Integrating Terminology and Dependency Position Encoding
李永辉, 叶娜, 白宇, 张桂平. 融合术语和依存位置编码的英中专利复杂长句机器翻译[J]. 计算机科学, 2025, 52(6A): 240600098-9.
LI Yonghui, YE Na, BAI Yu, ZHANG Guiping. Machine Translation of English-Chinese Long Complex Sentences in Patent Integrating Terminology and Dependency Position Encoding[J]. Computer Science, 2025, 52(6A): 240600098-9. - LI Yonghui, YE Na, BAI Yu, ZHANG Guiping
- Computer Science. 2025, 52 (6A): 240600098-9. doi:10.11896/jsjkx.240600098
-
Abstract
PDF(2408KB) ( 55 )
- References | Related Articles | Metrics
-
Existing neural machine translation methods still face some challenges when processing long complex sentences in patent texts.This paper first quantitatively defines long complex sentences in patent texts,and proposes a neural machine translation model that incorporates terminology information and dependency position encoding to address the problems of terminology omission and mistranslation,and sentence structure mistranslation in the translation process.The model integrates the constrained term vectorization into the attention module of the encoder and decoder and the output layer,and fuses dependency position encoding at the position encoding to alleviate the long-distance dependency problem.Experiments show that the proposed model significantly improves the term translation success rate and the overall translation performance of long complex sentences compared with several other baseline models.
-
Aspect-level Sentiment Analysis Models Based on Syntax and Semantics
黄志勇, 李弼程, 魏巍. 融合语法和语义信息的方面级情感分析模型[J]. 计算机科学, 2025, 52(6A): 240400193-7.
HUANG Zhiyong, LI Bicheng, WEI Wei. Aspect-level Sentiment Analysis Models Based on Syntax and Semantics[J]. Computer Science, 2025, 52(6A): 240400193-7. - HUANG Zhiyong, LI Bicheng, WEI Wei
- Computer Science. 2025, 52 (6A): 240400193-7. doi:10.11896/jsjkx.240400193
-
Abstract
PDF(2061KB) ( 39 )
- References | Related Articles | Metrics
-
As more and more people express their opinions online,the prevalence of emotionally charged posts is gradually increasing.The accumulation of negative emotions may lead to the loss of control over public opinion.Accurately identifying the emotional polarity of posts can effectively reveal the current state of public opinion.Current aspect-level sentiment analysis has not effectively integrated syntactic and semantic information,failing to simultaneously consider the complementarity of grammatical structures and semantic relevance.Therefore,a model for aspect-level sentiment analysis that integrates syntax and semantics(SS-GCN) is proposed,comprising a syntax analysis module,a semantics analysis module,and a fusion module.Firstly,the text is input to a pre-trained BERT model,and the syntax analysis module obtains feature representations of syntactic relationships.Simultaneously,the semantics analysis module,enhanced by a neighborhood enhancement mechanism,captures feature representations of semantic relevance.Finally,both representations are input to the fusion module,where,through affine transformation,syntactic and semantic information are effectively interacted and integrated,achieving aspect-level sentiment analysis.
-
MacBERT Based Chinese Named Entity Recognition Fusion with Dependent Syntactic Information and Multi-view Lexical Information
李代成, 李晗, 刘哲宇, 龚诗恒. 基于MacBERT的融合依存句法信息和多视角词汇信息的中文命名实体识别方法[J]. 计算机科学, 2025, 52(6A): 240600121-8.
LI Daicheng, LI Han, LIU Zheyu, GONG Shiheng. MacBERT Based Chinese Named Entity Recognition Fusion with Dependent Syntactic Information and Multi-view Lexical Information[J]. Computer Science, 2025, 52(6A): 240600121-8. - LI Daicheng, LI Han, LIU Zheyu, GONG Shiheng
- Computer Science. 2025, 52 (6A): 240600121-8. doi:10.11896/jsjkx.240600121
-
Abstract
PDF(2319KB) ( 39 )
- References | Related Articles | Metrics
-
In the Chinese environment of open entity types and complex entity structures,the Chinese named entity recognition(CNE) task encounters obvious issues such as entity boundary judgment errors and low accuracy of entity classification.In order to solve the above issues,a Chinese named entity recognition model called MacBERT-SDI-ML is proposed,which is based on the MacBERT pre-training model and uses characters as encoding units.Firstly,in order to extract richer Chinese semantic features and improve the accuracy of entity recognition,the model adopts MacBERT(the whole word masking for Chinese BERT) as the embedding layer.Secondly,in order to further enhance entity representation characteristics and improve the accuracy of entity classification,the model utilizes a dependency syntactic information parser(SDIP) to efficiently extract richer dependency information of entities and integrate it into the character representation.Additionally,considering the potential variation in character positions across different words,the model incorporates a multi-view lexical information fusion component(MLIF) based on the self-attention mechanism to further enhance the boundary features of the character representation and improve the accuracy of boundary judgment.Finally,experiments are conducted on the Weibo,OntoNotes and Resume datasets,and the results show that the F1 values of the proposed model reach 72.97%,86.56% and 98.45%,respectively.
-
Equipment Event Extraction Method Based on Semantic Enhancement
方睿, 崔良中, 方圆婧. 基于语义增强的装备事件抽取方法[J]. 计算机科学, 2025, 52(6A): 240900096-9.
FANG Rui, CUI Liangzhong, FANG Yuanjing. Equipment Event Extraction Method Based on Semantic Enhancement[J]. Computer Science, 2025, 52(6A): 240900096-9. - FANG Rui, CUI Liangzhong, FANG Yuanjing
- Computer Science. 2025, 52 (6A): 240900096-9. doi:10.11896/jsjkx.240900096
-
Abstract
PDF(2579KB) ( 35 )
- References | Related Articles | Metrics
-
In the information age,the volume of data in the equipment domain has surged dramatically,making it challenging for analysts to efficiently extract critical information to support relevant data analyses and arguments.To address the issue of ambiguous event argument boundaries in event extraction within the equipment sector,a semantic-enhanced event extraction method is proposed.This method utilizes specialized terminology and vocabulary information in the equipment domain to construct domain word vectors,and designs a model structure that can accommodate and integrate semantic information of different granularities.It fuses the equipment domain word vectors with the character vectors generated by the pre-trained model ERNIE,combines the knowledge of specialized terminology with general language comprehension,and realizes a more comprehensive capture of semantic information that enhances the model’s understanding of the textual semantics of the equipment domain,so as to improve the model’s ability to recognize the boundaries of event arguments.Experimental results demonstrate that,on the equipment domain dataset,the proposed method’s F1 value outperforms baseline approaches,with an improvement of 3.83% compared to the CK-BERT model;it is also validated on the public ACE2005 dataset,effectively improving the performance of event argument extraction in the equipment domain.
-
Concept Cognition for Knowledge Graphs by Mining Double Granularity Concept Characteristics
胡新, 段江丽, 黄德楠. 从挖掘双粒度概念特征的角度实现知识图谱概念认知[J]. 计算机科学, 2025, 52(6A): 240800047-6.
HU Xin, DUAN Jiangli, HUANG Denan. Concept Cognition for Knowledge Graphs by Mining Double Granularity Concept Characteristics[J]. Computer Science, 2025, 52(6A): 240800047-6. - HU Xin, DUAN Jiangli, HUANG Denan
- Computer Science. 2025, 52 (6A): 240800047-6. doi:10.11896/jsjkx.240800047
-
Abstract
PDF(2588KB) ( 64 )
- References | Related Articles | Metrics
-
Existing natural language understanding methods are based on information retrieval and matching,and do not have human-like cognitive ability.To simulate the human cognitive ability for concepts,the main task of concept cognition for knowledge graphs in this paper is to mine double-granularity concept characteristics,namely the frequent attributes and frequent attribute values of a concept,from two granularities,i.e.,the existence or nonexistence of attributes and the attribute values,which enables machines to distinguish or cognize concepts.Firstly,an algorithm is proposed to mine double-granularity concept characteristics from concept-related information in the knowledge graph.Secondly,to promote synergy between the two granularities,the monotonicity of the double-granularity attribute pattern is proposed and proven.Thirdly,to unleash the value of the above monotonicity and accelerate the mining process,the representativeness of the maximal frequent attribute pattern is used.Finally,experiments verify the efficiency of the mining algorithm,the monotonicity of double-granularity attribute patterns,the representativeness of the maximal frequent attribute pattern,and the cognitive ability of double-granularity concept characteristics.
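As an illustrative aside,the sketch below shows one way to mine the double-granularity characteristics the abstract describes,assuming the knowledge graph is available as (entity,attribute,value) triples;the function name and support threshold are hypothetical,not the paper's algorithm.

```python
# Illustrative sketch only: mine the frequent attributes (coarse granularity) and
# frequent attribute values (fine granularity) of one concept from
# (entity, attribute, value) triples.
from collections import Counter, defaultdict

def concept_characteristics(triples, entities, min_support=0.6):
    """triples: iterable of (entity, attribute, value); entities: instances of one concept."""
    ents = set(entities)
    attr_entities = defaultdict(set)          # attribute -> entities of the concept that have it
    value_counts = defaultdict(Counter)       # attribute -> frequency of each value
    for e, a, v in triples:
        if e in ents:
            attr_entities[a].add(e)
            value_counts[a][v] += 1
    n = len(ents)
    frequent_attrs = {a for a, es in attr_entities.items() if len(es) / n >= min_support}
    frequent_values = {a: [v for v, c in value_counts[a].items() if c / n >= min_support]
                       for a in frequent_attrs}
    return frequent_attrs, frequent_values
```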
-
Hippo Optimization Algorithm Improved by Multi-strategy and Multi-dimensional Fusion
任庆欣, 冯锋. 多策略多维度融合改进的河马优化算法[J]. 计算机科学, 2025, 52(6A): 240400145-8.
REN Qingxin, FENG Feng. Hippo Optimization Algorithm Improved by Multi-strategy and Multi-dimensional Fusion[J]. Computer Science, 2025, 52(6A): 240400145-8. - REN Qingxin, FENG Feng
- Computer Science. 2025, 52 (6A): 240400145-8. doi:10.11896/jsjkx.240400145
-
Abstract
PDF(3653KB) ( 57 )
- References | Related Articles | Metrics
-
This paper proposes an improved hippopotamus optimization algorithm based on multi-strategy and multi-dimension fusion(MSMDHO) to address issues of the original hippopotamus optimization algorithm(HO) such as slow convergence,susceptibility to local optima,and reliance on algorithm parameters.Firstly,a quasi-reverse learning mapping technique is employed to generate and perturb the initial population,enhancing the quality of its spatial distribution.Secondly,a sine-cosine optimization strategy is introduced in the first stage of HO to describe the oscillatory behavior of female or immature hippos in the position-update formula,using its oscillatory nature for continuous detection and perturbation to achieve better optimization results.Finally,in the defense-against-predators and fleeing-from-predators stages of HO,a tangent flight strategy and PID search factors are utilized to prevent the population from falling into local optima and to improve the overall convergence speed.The MSMDHO algorithm is tested against the HO algorithm,multi-verse optimization(MVO) algorithm,pelican optimization algorithm(POA),rat swarm optimizer(RSO),sailfish optimizer(SFO) and particle swarm optimization(PSO) algorithm on 8 benchmark functions.Results demonstrate that MSMDHO outperforms the other algorithms in global search capability,convergence speed,stability,and overall superiority.
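As an illustrative aside,the quasi-reverse (quasi-opposition) initialization mentioned as the first strategy can be sketched as follows;bounds,the fitness function and all names are placeholders rather than the paper's implementation.

```python
# Illustrative sketch only: quasi-opposition initialization -- each random
# individual is paired with a point sampled between the search-space centre and
# its opposite point, and the better half of the combined set is kept.
import numpy as np

def quasi_opposition_init(fitness, pop_size, dim, lb, ub, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(pop_size, dim))
    centre = (lb + ub) / 2.0
    opposite = lb + ub - x
    quasi = centre + rng.random((pop_size, dim)) * (opposite - centre)
    combined = np.vstack([x, quasi])
    scores = np.apply_along_axis(fitness, 1, combined)
    return combined[np.argsort(scores)[:pop_size]]       # keep the best pop_size individuals

# Example with a sphere fitness function:
# pop = quasi_opposition_init(lambda v: (v ** 2).sum(), 30, 10, -5, 5)
```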
-
Prediction of Moisture Content and Temperature of Tobacco Leaf Re-curing Outlet Based on Improved DBO-BP Neural Network
孙勇乾, 汤守国. 基于改进DBO-BP神经网络的烟叶复烤出口含水率和温度的预测[J]. 计算机科学, 2025, 52(6A): 240900069-7.
SUN Yongqian, TANG Shouguo. Prediction of Moisture Content and Temperature of Tobacco Leaf Re-curing Outlet Based on Improved DBO-BP Neural Network[J]. Computer Science, 2025, 52(6A): 240900069-7. - SUN Yongqian, TANG Shouguo
- Computer Science. 2025, 52 (6A): 240900069-7. doi:10.11896/jsjkx.240900069
-
Abstract
PDF(2478KB) ( 48 )
- References | Related Articles | Metrics
-
In order to improve the quality of tobacco leaves after re-curing,this paper proposes a prediction model based on an improved dung beetle optimisation algorithm(DBO)-BP neural network,which aims to accurately predict the moisture content and temperature at the oven outlet during the re-curing process.Firstly,the grey correlation analysis method is used to analyse the degree of correlation between the process parameters and the moisture content and temperature at the outlet of the oven.To improve the prediction accuracy and stability of the model,the Circle search strategy is introduced to optimise the dung beetle algorithm,so that it can explore the solution space more effectively and avoid falling into local optima.Secondly,the improved dung beetle algorithm is used to optimise the weights and thresholds of the BP neural network.Finally,a Circle-DBO-BP prediction model for the outlet moisture content and temperature of the re-curing oven is established.The prediction results are simulated in MATLAB and compared with the XGBoost,Tent-DBO-BP and SSA-BP models.Experimental results show that the improved Circle-DBO-BP model achieves MSEs of 0.046 7 and 0.038 4 in predicting the moisture content and temperature at the tobacco leaf re-curing outlet,respectively,which provides strong support for the control of the tobacco leaf re-curing process.
-
Study on Regional Cold Chain Multimodal Transport Routes Considering Multiple Tasks
石坤, 李德仓, 孟晏冰, 刘亚彤. 考虑多任务的区域冷链多式联运路径研究[J]. 计算机科学, 2025, 52(6A): 240600160-6.
SHI Kun, LI Decang, MENG Yanbing, LIU Yatong. Study on Regional Cold Chain Multimodal Transport Routes Considering Multiple Tasks[J]. Computer Science, 2025, 52(6A): 240600160-6. - SHI Kun, LI Decang, MENG Yanbing, LIU Yatong
- Computer Science. 2025, 52 (6A): 240600160-6. doi:10.11896/jsjkx.240600160
-
Abstract
PDF(1789KB) ( 40 )
- References | Related Articles | Metrics
-
According to the characteristics of cold chain logistics transportation,namely high transportation cost,strong timeliness and high carbon emissions,and considering the constraints of multiple tasks,carbon emissions,customer satisfaction and transportation arc capacity,a cold chain multimodal transportation path optimization model with the minimum total cost and the highest customer satisfaction is constructed and solved by a genetic simulated annealing algorithm.Taking the Chinese regional multimodal transport network from Nanning to Harbin as an example,the paper decides on the intermodal transport scheme and analyzes the sensitivity to the average temperature of node cities.The results show that the proposed algorithm has faster convergence speed and higher accuracy than the traditional genetic algorithm,and can effectively solve the model.The total satisfaction of the three transportation tasks is 0.53,0.86 and 0.75,respectively.The types of fresh food,customer timeliness requirements and transportation arc capacity all influence the intermodal scheme.The sensitivity analysis shows that as the average urban temperature increases,customer satisfaction shows a downward trend.The results of this study can provide a reference for cold chain multimodal transport route selection under different transportation scenarios.
-
Design of Autonomous Decision for Trajectory Optimization of Intelligent Morphing Aircraft
徐丹, 王江涛. 智能变形飞行器自主决策轨迹优化方法设计[J]. 计算机科学, 2025, 52(6A): 240600068-7.
XU Dan, WANG Jiangtao. Design of Autonomous Decision for Trajectory Optimization of Intelligent Morphing Aircraft[J]. Computer Science, 2025, 52(6A): 240600068-7. - XU Dan, WANG Jiangtao
- Computer Science. 2025, 52 (6A): 240600068-7. doi:10.11896/jsjkx.240600068
-
Abstract
PDF(3764KB) ( 40 )
- References | Related Articles | Metrics
-
Intelligent morphing aircraft is a new generation of aircraft that can timely and autonomously change its structural shape according to changes in flight mission and environment,meeting the requirements of different flight stages with different aerodynamic layouts.It is considered one of the development trends most likely to bring about technological change in aerospace vehicles in the future.However,large structural deformation makes it difficult to establish an accurate mathematical model.Therefore,a model-free reinforcement learning(RL) algorithm is used to realize autonomous decision-making for trajectory optimization through interactive learning.This paper takes an intelligent morphing aircraft flying at high speed in large airspace as the research object.Aiming at the technical problem that sufficient morphing flight test data are difficult to obtain in advance,which makes it difficult to predict the optimal aerodynamic shape under different flight states,a morphing decision optimization design scheme based on an RL network model is proposed.Deformations can be made autonomously according to real-time conditions during flight,so as to achieve the mission objectives of improving aerodynamic performance and optimizing the flight trajectory.
-
UAV Path Planning Based on Improved Dung Beetle Optimization Algorithm
叶明君, 王姝鉴. 基于改进蜣螂优化算法的无人机路径规划[J]. 计算机科学, 2025, 52(6A): 240900136-6.
YE Mingjun, WANG Shujian. UAV Path Planning Based on Improved Dung Beetle Optimization Algorithm[J]. Computer Science, 2025, 52(6A): 240900136-6. - YE Mingjun, WANG Shujian
- Computer Science. 2025, 52 (6A): 240900136-6. doi:10.11896/jsjkx.240900136
-
Abstract
PDF(3549KB) ( 45 )
- References | Related Articles | Metrics
-
In the context of the rapid development of UAV technology,efficient path planning strategies have become the key to improving the effectiveness and safety of UAV mission execution.In this paper,a multi-strategy dung beetle optimization(MDBO) algorithm is proposed to tackle this issue.The MDBO algorithm incorporates a Latin hypercube sampling initialization strategy,a mean-difference mutation strategy,and fused lens-imaging reverse learning and dimension-by-dimension optimization strategies,which significantly improve the convergence accuracy and convergence speed of the algorithm and enhance its global optimization capability.MDBO is compared with the DBO,COA and GWO algorithms on the UAV path planning problem through MATLAB simulation experiments,and the experimental results demonstrate that for the two constructed maps,the average flight path length solved by MDBO is reduced by 5.1% and 5.9% compared with DBO,with good convergence speed and stability,which verifies the effectiveness and superiority of the improved algorithm.
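As an illustrative aside,the Latin hypercube initialization listed as the first MDBO strategy can be sketched as follows;bounds and names are placeholders,not the paper's code.

```python
# Illustrative sketch only: Latin hypercube initialization of a population --
# one sample per equal-probability stratum in every dimension, in shuffled order.
import numpy as np

def latin_hypercube_init(pop_size, dim, lb, ub, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    u = np.empty((pop_size, dim))
    for d in range(dim):
        # stratum index shuffled per dimension, plus a jitter inside each stratum
        u[:, d] = (rng.permutation(pop_size) + rng.random(pop_size)) / pop_size
    return lb + u * (ub - lb)

# Example: latin_hypercube_init(30, 3, lb=[0, 0, 0], ub=[10, 5, 1])
```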
-
Research on Automatic Generation Method of Fault Tree Based on Network Decomposition
缪广宇, 神策, 方博杨. 基于网络分解的故障树自动生成方法研究[J]. 计算机科学, 2025, 52(6A): 240900108-6.
MIAO Guangyu, SHEN Ce, FANG Boyang. Research on Automatic Generation Method of Fault Tree Based on Network Decomposition[J]. Computer Science, 2025, 52(6A): 240900108-6. - MIAO Guangyu, SHEN Ce, FANG Boyang
- Computer Science. 2025, 52 (6A): 240900108-6. doi:10.11896/jsjkx.240900108
-
Abstract
PDF(2369KB) ( 37 )
- References | Related Articles | Metrics
-
As the modern aviation industry continues to evolve,flight simulators have become increasingly crucial for pilot training,system testing,and fault diagnosis.Among the tools used to enhance pilots' ability to handle abnormal conditions,fault tree analysis plays a pivotal role in ensuring the rationality of simulator design.Addressing the redundancy backup design in aviation systems,we propose a fault tree generation method based on network decomposition.This method takes system structure diagrams and target nodes as input,analyzes the connectivity paths within the network,and generates intuitive and clear fault trees.Common logical gates such as AND and OR are employed for readability and ease of integration with other software.The algorithm not only reduces manual workload but also enhances the efficiency and accuracy of simulator design.Additionally,we introduce a weighted node selection rule and simplify common topologies to optimize network decomposition efficiency.Finally,through examples using partial network diagrams of aircraft systems,we validate the correctness and performance of the proposed algorithm.
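As an illustrative aside,the sketch below shows the core idea under stated assumptions:connectivity paths from a supply node to a target node are enumerated with networkx,and loss of the target is expressed as an AND over redundant paths,each of which fails if any of its components fails (OR gate).The graph,node names and output format are hypothetical,not the paper's implementation.

```python
# Illustrative sketch only (not the paper's algorithm): enumerate supply paths
# and express "loss of the target" as an AND over redundant paths, each path
# failing if ANY of its components fails (OR gate).
import networkx as nx

def fault_tree(system_graph, source, target):
    paths = nx.all_simple_paths(system_graph, source, target)
    return {"gate": "AND",                     # top event: every redundant path is lost
            "children": [{"gate": "OR",
                          "children": [f"{n} fails" for n in p if n != source]}
                         for p in paths]}

# Hypothetical example: two redundant buses feeding one display.
g = nx.Graph([("GEN", "BUS1"), ("GEN", "BUS2"), ("BUS1", "DISPLAY"), ("BUS2", "DISPLAY")])
print(fault_tree(g, "GEN", "DISPLAY"))
```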
-
Self-matching Method of Virtual Terminals of Intelligent Stations Based on K-nearest Neighbor Weighting Algorithm
史卓鹏, 孔祥敏, 魏佳红, 宋晓帆. 基于K-近邻加权算法的智能站虚端子自匹配方法[J]. 计算机科学, 2025, 52(6A): 240600039-6.
SHI Zhuopeng, KONG Xiangmin, WEI Jiahong, SONG Xiaofan. Self-matching Method of Virtual Terminals of Intelligent Stations Based on K-nearest Neighbor Weighting Algorithm[J]. Computer Science, 2025, 52(6A): 240600039-6. - SHI Zhuopeng, KONG Xiangmin, WEI Jiahong, SONG Xiaofan
- Computer Science. 2025, 52 (6A): 240600039-6. doi:10.11896/jsjkx.240600039
-
Abstract
PDF(1907KB) ( 37 )
- References | Related Articles | Metrics
-
In order to solve the problems of frequent connection errors and repeated verification of the virtual terminal circuits of intelligent substations in engineering design,a self-matching method for virtual terminals of intelligent stations based on a K-nearest neighbor weighting algorithm is proposed.The whole-station virtual terminal matching problem of the intelligent substation is decomposed into the matching and connection of individual sending and receiving virtual terminals within a typical bay and a single intelligent electronic device(IED),and the format composition and connection of virtual terminals are introduced to construct a mathematical analysis model.Based on the distance measurement of the attribute connections between GOOSE and SV input and output virtual terminals of different IED devices,the attribute distance weights are tuned by a simulated annealing optimization method to improve the selection proximity of the algorithm,and the classification decision rule of the K-nearest neighbor algorithm is used to automatically match the corresponding virtual terminal connection combinations.The accuracy and efficiency of the algorithm are verified by engineering test examples,which improves the connection accuracy of the virtual circuits of the intelligent substation and ensures the safe and stable operation of the power grid.
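As an illustrative aside,a weighted K-nearest-neighbour matching step of the kind the abstract describes might look like the sketch below,assuming terminal attributes are already encoded as numeric vectors;the fixed weights stand in for the simulated-annealing-tuned weights and the decision rule is simplified.

```python
# Illustrative sketch only: weighted KNN matching between sending and receiving
# virtual-terminal descriptions encoded as numeric attribute vectors.
import numpy as np

def knn_match(senders, receivers, weights, k=3):
    """senders, receivers: [n, d] arrays; returns the matched receiver index per sender."""
    w = np.asarray(weights, float)
    matches = []
    for s in senders:
        d = np.sqrt((((receivers - s) ** 2) * w).sum(axis=1))   # weighted Euclidean distance
        candidates = np.argsort(d)[:k]                          # k nearest receivers
        matches.append(int(candidates[0]))                      # simplest rule: closest wins
    return matches
```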
-
Review of Concrete Defect Detection Methods Based on Deep Learning
王嘉敏, 武文红, 牛恒茂, 石宝, 乌尼尔, 郝旭, 张超, 付荣升. 基于深度学习的混凝土缺陷检测方法综述[J]. 计算机科学, 2025, 52(6A): 240900137-12.
WANG Jiamin, WU Wenhong, NIU Hengmao, SHI Bao, WU Nier, HAO Xu, ZHANG Chao, FU Rongsheng. Review of Concrete Defect Detection Methods Based on Deep Learning[J]. Computer Science, 2025, 52(6A): 240900137-12. - WANG Jiamin, WU Wenhong, NIU Hengmao, SHI Bao, WU Nier, HAO Xu, ZHANG Chao, FU Rongsheng
- Computer Science. 2025, 52 (6A): 240900137-12. doi:10.11896/jsjkx.240900137
-
Abstract
PDF(2755KB) ( 36 )
- References | Related Articles | Metrics
-
Concrete defect detection based on deep learning can effectively reduce infrastructure operation risks and save maintenance costs by providing an initial assessment of structural conditions.This paper reviews the research progress of concrete defect detection technologies in recent years,analyzes the existing achievements of related research,and discusses and compares the differences,advantages and disadvantages of various detection methods.The image datasets that can be used for concrete defect detection are sorted out and introduced.Then,starting from practical application,the possible problems in concrete defect detection are sorted out,and the related research that can solve the corresponding detection problems is expounded and analyzed.Finally,possible future development directions of the research are discussed.
-
Survey of Man-Machine Distance Detection Method in Construction Site
郝旭, 武文红, 牛恒茂, 石宝, 乌尼尔, 王嘉敏, 褚宏坤. 施工现场的人机距离检测方法综述[J]. 计算机科学, 2025, 52(6A): 240700098-10.
HAO Xu, WU Wenhong, NIU Hengmao, SHI Bao, WU Nier, WANG Jiamin, CHU Hongkun. Survey of Man-Machine Distance Detection Method in Construction Site[J]. Computer Science, 2025, 52(6A): 240700098-10. - HAO Xu, WU Wenhong, NIU Hengmao, SHI Bao, WU Nier, WANG Jiamin, CHU Hongkun
- Computer Science. 2025, 52 (6A): 240700098-10. doi:10.11896/jsjkx.240700098
-
Abstract
PDF(2358KB) ( 45 )
- References | Related Articles | Metrics
-
With the development of the construction industry,the use of construction machinery is becoming more and more frequent,and the resulting safety problems are becoming more and more serious.In recent years,among the production safety accidents that have occurred nationwide,construction lifting machinery accidents account for a significant proportion.Therefore,how to effectively monitor and prevent the potential risks between construction site workers and construction machinery has become a hot topic of current research.Firstly,this paper systematically summarizes the distance detection technologies between workers and construction machinery based on positioning technology and deep learning methods,focusing on the deep learning methods and expounding their key technologies.Secondly,according to the distance detection method,the research status at home and abroad is summarized,and the advantages and limitations of each method are compared and analyzed.Then,in view of the challenges faced by current research,corresponding improvement strategies are proposed.Finally,future development trends are given for follow-up research,providing a valuable reference for researchers in related fields.
-
Transmission Line Fault Identification Method Based on Transfer Learning and Improved YOLOv8s
黄柏澄, 王晓龙, 安国成, 张涛. 基于迁移学习与改进YOLOv8s的输电线路故障识别方法[J]. 计算机科学, 2025, 52(6A): 240800044-8.
HUANG Bocheng, WANG Xiaolong, AN Guocheng, ZHANG Tao. Transmission Line Fault Identification Method Based on Transfer Learning and Improved YOLOv8s[J]. Computer Science, 2025, 52(6A): 240800044-8. - HUANG Bocheng, WANG Xiaolong, AN Guocheng, ZHANG Tao
- Computer Science. 2025, 52 (6A): 240800044-8. doi:10.11896/jsjkx.240800044
-
Abstract
PDF(4408KB) ( 39 )
- References | Related Articles | Metrics
-
At present,there are serious problems when identifying some fault categories in transmission lines,such as insufficient samples and the difficulty of locating small and distant targets captured by drones,resulting in low accuracy of transmission line fault identification.To address the above issues,a novel transmission line fault identification method based on transfer learning and improved YOLOv8s is proposed.Firstly,taking YOLOv8s as the baseline,the transfer learning method is used to improve recognition performance in few-shot scenarios,and a bidirectional correlation sample selection module is proposed to obtain source sample categories strongly correlated with the target domain,avoiding the negative transfer that may occur when using transfer learning and effectively improving the fault recognition performance of the model.Secondly,aiming at the difficulty of small target localization,after fusing the 80×80 output feature map with the shallow feature map,EMA multi-scale attention is introduced to enhance the feature information of small targets and design a small-target attention detection layer.To improve the loss function,the CIoU loss is replaced by the NWD loss in the prediction box regression loss,which solves the problem that IoU is sensitive to small target position deviation;specifically,the Wasserstein distance is used to measure the similarity between the prediction box and the ground-truth box.Experimental results show that in the case of few-shot and small targets,the proposed method achieves a mAP of 51.1% on the transmission line fault dataset,which is 8.2% higher than that of the YOLOv8s baseline model,effectively improving the accuracy of fault recognition and providing a new solution for few-shot and small-target transmission line fault identification.
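As an illustrative aside,the NWD similarity that replaces CIoU in the regression loss can be sketched as below,following the published Normalized Gaussian Wasserstein Distance formulation;the constant C is dataset-dependent and the value shown is only a placeholder.

```python
# Illustrative sketch only: Normalized Wasserstein Distance (NWD) similarity
# between boxes, each modelled as a 2D Gaussian.  The loss would be 1 - nwd.
import torch

def nwd(pred, target, C=12.8):
    """pred, target: [N, 4] boxes in (cx, cy, w, h) format."""
    p = torch.cat([pred[:, :2], pred[:, 2:] / 2], dim=1)
    t = torch.cat([target[:, :2], target[:, 2:] / 2], dim=1)
    w2 = ((p - t) ** 2).sum(dim=1)               # squared 2nd-order Wasserstein distance
    return torch.exp(-torch.sqrt(w2) / C)        # similarity in (0, 1]
```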
-
Image Classification Model for Waste Household Appliance Recycling Based on Multi-scale Depthwise Separable ResNet
雷帅, 仇明鑫, 柳先辉, 张颖瑶. 基于多尺度深度可分离ResNet的废弃家电回收图像分类模型[J]. 计算机科学, 2025, 52(6A): 240500057-7.
LEI Shuai, QIU Mingxin, LIU Xianhui, ZHANG Yingyao. Image Classification Model for Waste Household Appliance Recycling Based on Multi-scale Depthwise Separable ResNet[J]. Computer Science, 2025, 52(6A): 240500057-7. - LEI Shuai, QIU Mingxin, LIU Xianhui, ZHANG Yingyao
- Computer Science. 2025, 52 (6A): 240500057-7. doi:10.11896/jsjkx.240500057
-
Abstract
PDF(3063KB) ( 53 )
- References | Related Articles | Metrics
-
In response to the challenge of effectively utilizing the massive number of images involved in waste household appliance recycling,a waste household appliance image recognition model named ME-ResNet(multi-scale and efficient ResNet) is proposed based on ResNet and multi-scale convolution.Firstly,a multi-scale convolution module is designed using a residual structure to enhance the model's capability to extract feature information across different scales.Building upon this,the ME-ResNet model is designed for the classification of waste household appliance images based on ResNet.Secondly,the ME-ResNet model is lightweighted by replacing certain convolutional layers in the multi-scale convolution with depthwise separable convolution.Finally,the performance of ME-ResNet and its lightweight variant is validated through comparative experiments with other convolutional neural networks.Results demonstrate that both ME-ResNet and its lightweight variant effectively improve recognition accuracy.Compared to the classical convolutional neural network ResNet34,ME-ResNet and its lightweight version achieve optimal accuracy increases of 1.2% and 0.3%,macro-precision increases of 1.7% and 0.9%,macro-recall increases of 1.3% and 0.2%,and macro-F1 score increases of 1.5% and 0.5%,respectively.
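As an illustrative aside,the depthwise separable substitution used to lighten the multi-scale branches can be sketched as a standard PyTorch block;the layer choices below are typical defaults,not the paper's exact configuration.

```python
# Illustrative sketch only: a depthwise-separable replacement for a standard
# convolution (depthwise spatial filtering followed by 1x1 channel mixing).
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```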
-
Human Target Detection Algorithm for Low-quality Laser Through-window Imaging
伍智华, 程江华, 刘通, 蔡亚辉, 程榜, 潘乐昊. 激光透窗低质量成像人体目标检测算法[J]. 计算机科学, 2025, 52(6A): 240600069-6.
WU Zhihua, CHENG Jianghua, LIU Tong, CAI Yahui, CHENG Bang, PAN Lehao. Human Target Detection Algorithm for Low-quality Laser Through-window Imaging[J]. Computer Science, 2025, 52(6A): 240600069-6. - WU Zhihua, CHENG Jianghua, LIU Tong, CAI Yahui, CHENG Bang, PAN Lehao
- Computer Science. 2025, 52 (6A): 240600069-6. doi:10.11896/jsjkx.240600069
-
Abstract
PDF(3846KB) ( 36 )
- References | Related Articles | Metrics
-
In response to the challenges of inaccurate detection and low recognition rates in human target detection under the low-quality imaging of laser through-window technology,an enhanced target detection algorithm,YOLO-TC,based on YOLOv8n is proposed.The feature extraction module of the backbone has been redesigned to enhance the model’s multi-scale feature representation capability.Pruning of the YOLOv8n model has been employed to optimize the network structure,reduce model complexity,and enhance detection accuracy.An EMA attention mechanism module has been introduced between the C2f module and the detection head(Detect) to improve semantic and location information in feature fusion and enhance the model’s feature fusion ability.The SIoU bounding box regression loss function is used instead of the original loss function to improve the inference accuracy and training speed of the algorithm.Experimental results on a laser through-window imaging dataset demonstrate that the Precision,Recall,and mean Average Precision(mAP) of the improved model have increased by 7.7%,5.9%,and 7.0% respectively.Furthermore,the model size has been reduced by 34.6% compared to the original model,making it suitable for subsequent edge hardware deployment.
-
Aircraft Landing Gear Safety Pin Detection Algorithm Based on Improved YOLOv5s
陈世嘉, 叶剑元, 龚轩, 曾康, 倪鹏程. 基于改进YOLOv5s的飞机起落架安全销检测算法[J]. 计算机科学, 2025, 52(6A): 240400189-7.
CHEN Shijia, YE Jianyuan, GONG Xuan, ZENG Kang, NI Pengcheng. Aircraft Landing Gear Safety Pin Detection Algorithm Based on Improved YOLOv5s[J]. Computer Science, 2025, 52(6A): 240400189-7. - CHEN Shijia, YE Jianyuan, GONG Xuan, ZENG Kang, NI Pengcheng
- Computer Science. 2025, 52 (6A): 240400189-7. doi:10.11896/jsjkx.240400189
-
Abstract
PDF(3677KB) ( 40 )
- References | Related Articles | Metrics
-
The aircraft landing gear safety pin is a safety protection device;before takeoff,it must be ensured that the safety pin has been pulled out,so as to protect flight safety.The traditional inspection method for aircraft landing gear safety pins is based on manual patrol,which is inefficient and prone to safety hazards caused by human factors.To solve this problem,deep learning-based target detection algorithms are applied to aircraft landing gear safety pin inspection for the first time,and the algorithm model is optimised for lightweight design and performance so as to better meet the inspection task in terms of computing resources,storage resources and algorithm performance.Based on the industrial-grade deep learning target detection model YOLOv5,the model is lightweighted by introducing MobileNetV3 as the backbone network for feature extraction,which greatly reduces the parameters and GFLOPs while maintaining accuracy;in terms of algorithm performance,a lightweight coordinate attention module is inserted to help the network locate targets more accurately and improve detection accuracy.Experimental results show that the improved YOLOv5 model can effectively perform the aircraft landing gear safety pin detection task.Compared with the pre-optimization model,the mAP is increased by 2.5%,the F1 score is increased by 1.4%,the number of parameters is reduced by 50%,and the GFLOPs are reduced by 61%.The algorithm can serve as a reference for automatic aircraft landing gear safety pin detection methods.
-
Material SEM Image Retrieval Method Based on Multi-scale Features and Enhanced Hybrid Attention Mechanism
曾凡运, 廉贺淳, 冯珊珊, 王庆梅. 基于多尺度特征和增强混合注意力机制的材料SEM图像检索方法[J]. 计算机科学, 2025, 52(6A): 240800014-7.
ZENG Fanyun, LIAN Hechun, FENG Shanshan, WANG Qingmei. Material SEM Image Retrieval Method Based on Multi-scale Features and Enhanced Hybrid Attention Mechanism[J]. Computer Science, 2025, 52(6A): 240800014-7. - ZENG Fanyun, LIAN Hechun, FENG Shanshan, WANG Qingmei
- Computer Science. 2025, 52 (6A): 240800014-7. doi:10.11896/jsjkx.240800014
-
Abstract
PDF(2700KB) ( 41 )
- References | Related Articles | Metrics
-
Material SEM images are rich in content,and traditional retrieval methods and general-domain retrieval methods are easily affected by factors such as image distortion and complex textures during image feature extraction,resulting in suboptimal extraction of key features.Aiming at the shortcomings of conventional methods in feature extraction and efficient retrieval of material SEM images,this paper proposes an image retrieval method based on multi-scale feature information,integrating Atrous Spatial Pyramid Pooling(ASPP) and an enhanced convolutional block attention module(ECBAM).The method employs the ConvNeXt network for feature extraction,leveraging the large receptive fields of dilated convolutions and the advantages of residual networks to capture more details and complex textures,effectively extracting both local and global features.Additionally,by incorporating the recent Mamba module and modifying it into a bidirectional architecture integrated with CBAM,the enhanced hybrid attention mechanism ECBAM is proposed.The combination of ASPP and ECBAM ensures stable and efficient feature fusion and enhancement.Experimental results demonstrate that this method achieves superior retrieval performance on material SEM image datasets,with an average retrieval accuracy improvement of 1.5% compared to mainstream retrieval methods.
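As an illustrative aside,an ASPP block of the kind combined with the attention module can be sketched as follows;the dilation rates are common defaults rather than the paper's configuration.

```python
# Illustrative sketch only: Atrous Spatial Pyramid Pooling -- parallel dilated
# convolutions with different rates, concatenated and fused by a 1x1 projection.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```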
-
Improved Helmet Detection Algorithm of Electric Bicycle Based on YOLOv8n
白少康, 王宝会, 陈继轩. 基于YOLOv8n的电动自行车佩戴头盔检测算法改进[J]. 计算机科学, 2025, 52(6A): 240900167-8.
BAI Shaokang, WANG Baohui, CHEN Jixuan. Improved Helmet Detection Algorithm of Electric Bicycle Based on YOLOv8n[J]. Computer Science, 2025, 52(6A): 240900167-8. - BAI Shaokang, WANG Baohui, CHEN Jixuan
- Computer Science. 2025, 52 (6A): 240900167-8. doi:10.11896/jsjkx.240900167
-
Abstract
PDF(2916KB) ( 56 )
- References | Related Articles | Metrics
-
With the continuous advancement of the transportation industry,the necessity of helmet-wearing during the riding of electric bicycles has been consistently verified.Meanwhile,the detection of helmet-wearing by electric bicycle riders has also received extensive research in the domain of deep learning.Currently,there exist issues such as significant detection difficulties in dense scenarios and challenges in detecting small targets when it comes to helmet-wearing by electric bicycle riders.Concurrently,in order to achieve more effective deployment on mobile terminals,a lightweight model needs to be selected.Hence,an improved detection algorithm for helmet-wearing by electric bicycle riders based on YOLOv8n has been proposed.Firstly,YOLOv8n is the most lightweight model among the YOLOv8 series,enabling better deployment on mobile devices.Additionally,a switchable atrous convolution is introduced into the backbone network of YOLOv8n,enhancing the feature extraction capability of YOLOv8n in dense scenes without increasing the computational load.At the end of the image pyramid feature fusion network of YOLOv8n,a triple attention mechanism is integrated to strengthen the extraction ability of important features from the fused information of different-scale features by the YOLOv8n model.Finally,the fusion of large-sized feature information with small-sized feature information is added to improve the detection effect of small targets.Ultimately,while ensuring the lightweight nature of YOLOv8n,when using the improved model in the self-constructed validation set,the mAP@50,mAP@50-90,and recall rate have increased by 2.7%,2.8% and 2.5%,respectively.Therefore,the proposed method holds certain practical and scientific significance.
-
YOLO-BFEPS:Efficient Attention-enhanced Cross-scale YOLOv10 Fire Detection Model
高均益, 张伟, 李泽麟. YOLO-BFEPS:一种高效注意力增强的跨尺度YOLOv10火灾检测模型[J]. 计算机科学, 2025, 52(6A): 240800134-9.
GAO Junyi, ZHANG Wei, LI Zelin. YOLO-BFEPS:Efficient Attention-enhanced Cross-scale YOLOv10 Fire Detection Model[J]. Computer Science, 2025, 52(6A): 240800134-9. - GAO Junyi, ZHANG Wei, LI Zelin
- Computer Science. 2025, 52 (6A): 240800134-9. doi:10.11896/jsjkx.240800134
-
Abstract
PDF(3698KB) ( 53 )
- References | Related Articles | Metrics
-
In order to solve the problems of delayed early warning and reduced recognition accuracy in traditional fire detection models,caused by insufficient feature extraction and excessive model complexity in complex scenes,a target detection model based on improved YOLOv10 that can be deployed on terminal devices is proposed to achieve rapid and accurate detection of both smoke and fire,named YOLO-BFEPS(YOLO bi-directional fusion with enhanced partial self-attention). Firstly,the PSA module is improved to enhance spatial semantic feature extraction and to solve the information loss and increased computational complexity caused by channel dimensionality reduction when modeling cross-channel relationships,improving detection accuracy;the improved module is denoted E-PSA(enhanced partial self-attention). Secondly,scale fusion is carried out based on BiFPN's idea of bidirectional cross-connection of feature layers,the neck structure of YOLOv10 is redesigned,and the fusion of information from low feature layers is innovatively increased,which greatly reduces the model parameters and computational complexity while maintaining accuracy. The bottleneck structure of the C2f module is replaced with a faster block structure to implement the lightweight design of the model,and the resulting module is called C2f-Faster. Finally,experiments verify the effectiveness of the proposed model on multiple datasets. The results show that the proposed model improves Precision and mAP@0.5 by 5.9% and 1.4% respectively while reducing the number of parameters by 35.5% and the computational complexity by 17.6%.
-
Visual Question Answering Integrating Visual Common Sense Features and Gated Counting Module
徐钰涛, 汤守国. 融合视觉常识特征和门控计数方法的视觉问答[J]. 计算机科学, 2025, 52(6A): 240800086-7.
XU Yutao, TANG Shouguo. Visual Question Answering Integrating Visual Common Sense Features and Gated Counting Module[J]. Computer Science, 2025, 52(6A): 240800086-7. - XU Yutao, TANG Shouguo
- Computer Science. 2025, 52 (6A): 240800086-7. doi:10.11896/jsjkx.240800086
-
Abstract
PDF(2827KB) ( 39 )
- References | Related Articles | Metrics
-
To better explore the potential common sense information in images,this paper introduces visual common sense features into the visual question answering(VQA) task,and effectively integrates bottom-up features with visual common sense features through a visual feature fusion module,thus realizing rich visual feature representations.A guided attention fusion method,which feeds the bottom-up features and visual common sense features into an information interaction module,enables the attention mechanism to capture image content more relevant to the question text.On this basis,this paper also designs a gated counting module(GCM) to retain the number of entities in image features.This module significantly improves model performance on counting questions while maintaining information integrity and relevance.Compared to traditional methods,GCM is able to handle visual questions involving quantities more accurately,thus enhancing the accuracy of the overall VQA task.Finally,extensive experiments on the widely used VQA v2.0 dataset yield relatively good results.
-
Distillation Method for Text-to-Audio Generation Based on Balanced SNR-aware
刘炳志, 曹寅, 周翊. 基于平衡信噪比感知的文本到音频生成蒸馏方法[J]. 计算机科学, 2025, 52(6A): 240900125-5.
LIU Bingzhi, CAO Yin, ZHOU Yi. Distillation Method for Text-to-Audio Generation Based on Balanced SNR-aware[J]. Computer Science, 2025, 52(6A): 240900125-5. - LIU Bingzhi, CAO Yin, ZHOU Yi
- Computer Science. 2025, 52 (6A): 240900125-5. doi:10.11896/jsjkx.240900125
-
Abstract
PDF(2290KB) ( 35 )
- References | Related Articles | Metrics
-
Diffusion models have demonstrated promising results in text-to-audio(TTA) generation tasks.However,their practical usability is limited by slow sampling speeds,which restrict their applicability in high-throughput scenarios.To address this issue,progressive distillation methods have been applied to effectively create more streamlined and efficient models.Nevertheless,these methods suffer from unbalanced loss weights at high and low noise levels,potentially impacting the quality of generated samples.In this paper,we propose a balanced SNR-aware(BSA) method,an enhanced loss-weighting mechanism for diffusion distillation that weights the loss at both high and low noise levels in a balanced way.We evaluate the proposed method on the AudioCaps dataset,and the experimental results show superior performance during the reverse diffusion process compared to previous distillation methods with the same number of sampling steps.Furthermore,the BSA method allows a significant reduction in sampling steps from 200 to 25,with minimal performance degradation compared to the original teacher models.
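As an illustrative aside,an SNR-dependent loss weight of the general kind the abstract describes can be sketched as below;the truncated-SNR form shown is a common balancing choice and only stands in for the paper's balanced SNR-aware weighting,and all names are placeholders.

```python
# Illustrative sketch only: an SNR-dependent weight for the per-timestep
# distillation loss, down-weighting very low-noise (high-SNR) steps.
import torch

def snr_balanced_weight(alphas_cumprod, t, gamma=5.0):
    """alphas_cumprod: [T] tensor of cumulative alpha products; t: [B] integer timesteps."""
    snr = alphas_cumprod[t] / (1.0 - alphas_cumprod[t])   # signal-to-noise ratio at each step
    return torch.clamp(snr, max=gamma) / snr              # min(SNR, gamma) / SNR

# Usage (per-sample squared error between student and teacher predictions):
# loss = (snr_balanced_weight(acp, t) * (student - teacher).pow(2).mean(dim=-1)).mean()
```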
-
Human Pose Estimation Using Millimeter Wave Radar Based on Transformer and PointNet++
李阳, 刘毅, 李浩, 张刚, 徐明枫, 郝崇清. 基于Transformer和PointNet++的毫米波雷达人体姿态估计[J]. 计算机科学, 2025, 52(6A): 240400169-9.
LI Yang, LIU Yi, LI Hao, ZHANG Gang, XU Mingfeng, HAO Chongqing. Human Pose Estimation Using Millimeter Wave Radar Based on Transformer and PointNet++[J]. Computer Science, 2025, 52(6A): 240400169-9. - LI Yang, LIU Yi, LI Hao, ZHANG Gang, XU Mingfeng, HAO Chongqing
- Computer Science. 2025, 52 (6A): 240400169-9. doi:10.11896/jsjkx.240400169
-
Abstract
PDF(3713KB) ( 58 )
- References | Related Articles | Metrics
-
Human pose estimation,as a hot research topic in the field of action recognition,is widely applied in the medical,security,and monitoring fields,and is of great significance for promoting the intelligent development of related industries.However,current image-based human pose estimation has high environmental requirements and poor privacy.On this basis,a human pose estimation method based on millimeter wave radar point clouds is proposed.This method uses PointNet++ to extract features from millimeter wave radar point clouds;compared with CNN-based pose estimation methods,it achieves lower MSE,MAE,and RMSE values at each joint point.In addition,to solve the problem of sparse millimeter wave radar point clouds,a multi-frame point cloud stitching strategy is used to increase the number of points;the model that concatenates three frames of point clouds as input reduces the MSE and MAE values by 0.22 cm and 0.72 cm respectively compared to the original model,effectively alleviating the problem of overly sparse point clouds.Finally,in order to fully utilize the temporal features between different point clouds,Transformer is combined with PointNet++,and the effectiveness of the multi-frame point cloud stitching strategy and the added Transformer structure is demonstrated through ablation experiments.The MSE and MAE values reach 0.59 cm and 5.41 cm respectively,providing a new approach for achieving better-performing RF-based human pose estimation.
-
TalentDepth:A Monocular Depth Estimation Model for Complex Weather Scenarios Based on Multiscale Attention Mechanism
张航, 卫守林, 殷继彬. TalentDepth:基于多尺度注意力机制的复杂天气场景单目深度估计模型[J]. 计算机科学, 2025, 52(6A): 240900126-7.
ZHANG Hang, WEI Shoulin, YIN Jibin. TalentDepth:A Monocular Depth Estimation Model for Complex Weather Scenarios Based on Multiscale Attention Mechanism[J]. Computer Science, 2025, 52(6A): 240900126-7. - ZHANG Hang, WEI Shoulin, YIN Jibin
- Computer Science. 2025, 52 (6A): 240900126-7. doi:10.11896/jsjkx.240900126
-
Abstract
PDF(2938KB) ( 39 )
- References | Related Articles | Metrics
-
To address the inaccurate prediction of depth information caused by blurring,low contrast and color distortion in complex weather scene images,previous studies have used the depth map of a standard scene as prior information for depth estimation of such scenes.However,this approach suffers from problems such as low accuracy of the prior information.This paper proposes a monocular depth estimation model,TalentDepth,based on a multiscale attention mechanism to realize prediction for complex weather scenes.First,the multiscale attention mechanism is fused into the encoder to reduce the computational cost while retaining the information of each channel,improving the efficiency and capability of feature extraction.Second,to address the problem of unclear image depth,a depth region refinement(DSR) module based on geometric consistency is proposed to filter out inaccurate pixels and improve the reliability of depth information.Finally,complex samples generated by an image translation model are fed into the model,and the standard loss on the corresponding original images is calculated to guide self-supervised training.On the NuScenes,KITTI and KITTI-C datasets,the error and accuracy are improved compared to the baseline model.
-
High Quality Image Generation Method Based on Improved Diffusion Model
侯哲晓, 李弼程, 蔡炳炎, 许逸飞. 基于改进扩散模型的高质量图像生成方法[J]. 计算机科学, 2025, 52(6A): 240500094-9.
HOU Zhexiao, LI Bicheng, CAI Bingyan, XU Yifei. High Quality Image Generation Method Based on Improved Diffusion Model[J]. Computer Science, 2025, 52(6A): 240500094-9. - HOU Zhexiao, LI Bicheng, CAI Bingyan, XU Yifei
- Computer Science. 2025, 52 (6A): 240500094-9. doi:10.11896/jsjkx.240500094
-
Abstract
PDF(6289KB) ( 42 )
- References | Related Articles | Metrics
-
Image generation is a research focus of AIGC in the AI 2.0 era, and the iteration of generative models drives the development of image generation technology. At present, the sample quality of mainstream generative models is low and cannot meet AIGC's high-fidelity requirements for images, and emerging diffusion models still struggle to achieve high-quality results in unconditional generation. Therefore, this paper proposes a high-quality image generation method based on an improved diffusion model. Firstly, a diffusion model with stable training and excellent sampling quality is used as the baseline model. Secondly, the self-attention mechanism in the diffusion model is used to guide noise generation, restoring the low-frequency content of the image and enhancing the stability of the denoising process. Finally, a recursive feature pyramid is integrated into the noise predictor structure, and image feature information is repeatedly refined to capture the rich high-frequency details in the image. Comparison and ablation experiments on three standard datasets and four small datasets show that the proposed method outperforms other methods.
-
Multi-feature Fusion and Ensemble Learning-based Wind Turbine Blade Defect Detection Method
王瑞, 汤占军. 基于多特征融合与集成学习的风机叶片缺陷检测方法[J]. 计算机科学, 2025, 52(6A): 240900138-8.
WANG Rui, TANG Zhanjun. Multi-feature Fusion and Ensemble Learning-based Wind Turbine Blade Defect Detection Method[J]. Computer Science, 2025, 52(6A): 240900138-8. - WANG Rui, TANG Zhanjun
- Computer Science. 2025, 52 (6A): 240900138-8. doi:10.11896/jsjkx.240900138
-
Abstract
PDF(3438KB) ( 41 )
- References | Related Articles | Metrics
-
To address the challenges of complex feature handling and diverse defect appearances in drone-based wind turbine blade surface defect detection, this paper introduces a novel approach based on multi-feature fusion and ensemble learning. The proposed method integrates local LBP features, HOG features, and high-level features from capsule networks into a comprehensive multi-feature extraction model, enhancing detail resolution. Additionally, three base classifiers with distinct bias and variance characteristics, namely support vector machine (SVM), k-nearest neighbors (KNN), and decision tree (DT), are utilized to construct a heterogeneous ensemble learning model, leveraging the strengths of each base model to improve overall performance. Validation on a wind turbine blade surface defect dataset reveals that the multi-feature extraction model (MFEM) achieves an average precision (AP) of 98%, outperforming YOLOv7 and Faster R-CNN by 3.1% and 5.8% respectively, and demonstrating substantial improvements over individual SVM, KNN, and DT models. Ablation studies further confirm the effectiveness of the proposed model. The results underscore the superior performance of the MFEM in wind turbine blade defect detection tasks.
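A minimal sketch of the fusion-plus-ensemble idea (illustrative only: the LBP/HOG parameters are assumptions, and the capsule-network features are represented by a placeholder array rather than the authors' model):

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

def handcrafted_features(img):
    """LBP histogram + HOG descriptor for one grayscale patch (H, W)."""
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

def fuse(images, capsule_feats):
    """Stack handcrafted and deep features; capsule_feats is a placeholder
    (N, D) array that would come from a pretrained capsule network."""
    hand = np.stack([handcrafted_features(im) for im in images])
    return np.hstack([hand, capsule_feats])

# Heterogeneous ensemble over the fused features; soft voting averages
# the class probabilities of the three base classifiers.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("dt", DecisionTreeClassifier(max_depth=8))],
    voting="soft")
# ensemble.fit(fuse(train_imgs, train_caps), y_train)  # hypothetical data
```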
-
High-precision and Real-time Detection Algorithm for Photovoltaic Glass Edge Defects Based on Feature Reuse and Cheap Operation
丁绪星, 周学顶, 钱强, 任悦悦, 冯友宏. 结合特征复用和廉价操作的高精度光伏玻璃边部缺陷实时检测算法[J]. 计算机科学, 2025, 52(6A): 240400146-10.
DING Xuxing, ZHOU Xueding, QIAN Qiang, REN Yueyue, FENG Youhong. High-precision and Real-time Detection Algorithm for Photovoltaic Glass Edge Defects Based on Feature Reuse and Cheap Operation[J]. Computer Science, 2025, 52(6A): 240400146-10. - DING Xuxing, ZHOU Xueding, QIAN Qiang, REN Yueyue, FENG Youhong
- Computer Science. 2025, 52 (6A): 240400146-10. doi:10.11896/jsjkx.240400146
-
Abstract
PDF(2729KB) ( 40 )
- References | Related Articles | Metrics
-
To address the heavy computation, large parameter counts, slow detection speed, and low detection accuracy of existing defect detection algorithms, this paper proposes a high-precision and real-time detection algorithm for photovoltaic glass edge defects based on YOLOv5. Firstly, a newly designed dense connection block (New_DBlock(C)) based on cheap operation and feature reuse replaces the C3 block of YOLOv5's feature extraction network, reducing the computation and parameter count of the whole algorithm. Secondly, a C2f_SE block fused with the channel attention mechanism SE (Squeeze-and-Excitation) replaces the C3 block of YOLOv5's feature fusion network to improve detection speed and accuracy. Finally, the improved decoupled detection head of YOLOv8 replaces the coupled detection head of YOLOv5 to improve the localization and classification accuracy of the algorithm. Experimental results show that the improved algorithm increases mAP@0.5 by 1.0% and mAP@0.5:0.95 by 3.1%, reduces computation by 48.1% and parameters by 56.7%, and increases detection speed by 18.5%. Compared with other mainstream YOLO and R-CNN series algorithms, the improved algorithm also achieves higher detection accuracy and speed with lower computation and fewer parameters, making it suitable for real-time detection of photovoltaic glass edge defects.
-
FOD Segmentation Method Based on Dual-channel Sparrow Search Algorithm-enhanced OTSU
费春国, 陈世洪. 基于双通道麻雀改进OTSU的FOD分割方法[J]. 计算机科学, 2025, 52(6A): 240700089-7.
FEI Chunguo, CHEN Shihong. FOD Segmentation Method Based on Dual-channel Sparrow Search Algorithm-enhanced OTSU[J]. Computer Science, 2025, 52(6A): 240700089-7. - FEI Chunguo, CHEN Shihong
- Computer Science. 2025, 52 (6A): 240700089-7. doi:10.11896/jsjkx.240700089
-
Abstract
PDF(4986KB) ( 39 )
- References | Related Articles | Metrics
-
In image-processing-based segmentation of foreign object debris (FOD) on airport runways, deep learning-based segmentation cannot reliably perceive FOD types that were not seen during training. Therefore, this paper proposes a segmentation method based on a dual-channel sparrow search algorithm-enhanced OTSU method (DS-OTSU). In this method, the sparrow search algorithm is combined with OTSU: an optimal point set is added to the sparrow search algorithm to optimize the initial population, and perturbations in the positive and negative directions are added through dual channels to change how the objective function of the sparrow search algorithm is calculated. Experimental analysis shows that the detection accuracy and convergence speed of the proposed method are superior to those of other methods.
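For orientation, the quantity being maximized is the classic OTSU between-class variance; the sketch below replaces the dual-channel sparrow search with a plain exhaustive search over thresholds, so it only illustrates the objective, not the proposed optimizer:

```python
import numpy as np

def otsu_objective(gray, t):
    """Between-class variance of a grayscale image split at threshold t."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    levels = np.arange(256)
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def best_threshold(gray):
    # Stand-in optimizer: exhaustive search; DS-OTSU would instead evolve
    # candidate thresholds with the dual-channel sparrow search algorithm.
    return max(range(1, 256), key=lambda t: otsu_objective(gray, t))

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)
t_star = best_threshold(gray)
mask = gray >= t_star  # candidate FOD foreground
```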
-
Ships Detection in Remote Sensing Images Based on Improved PPYOLOE-R
陈天鹏, 胡建文, 李海涛. 基于改进PPYOLOE-R的遥感图像舰船目标检测[J]. 计算机科学, 2025, 52(6A): 240600118-8.
CHEN Tianpeng, HU Jianwen, LI Haitao. Ships Detection in Remote Sensing Images Based on Improved PPYOLOE-R[J]. Computer Science, 2025, 52(6A): 240600118-8. - CHEN Tianpeng, HU Jianwen, LI Haitao
- Computer Science. 2025, 52 (6A): 240600118-8. doi:10.11896/jsjkx.240600118
-
Abstract
PDF(6208KB) ( 44 )
- References | Related Articles | Metrics
-
Remote sensing images have complex backgrounds, and ships in such images often resemble the surrounding harbor. In addition, small and densely arranged ship objects pose challenges for existing deep learning detection algorithms, leading to missed detections, incorrect detections, and poor accuracy. This paper proposes an improved PPYOLOE-R object detection algorithm for ships in remote sensing images. PPYOLOE-R is used as the baseline, and a shuffle attention module is introduced into the neck network to enhance the feature extraction capability of the model. An improved Focal Loss is also introduced, which combines the category score with the localization score and softens the category labels, improving the model's ability to distinguish between hard and easy samples. The ship category in the DOTA dataset is extracted to produce the DOTA_ships dataset. Experimental results on the HRSC2016 and DOTA_ships datasets show that the proposed method achieves average precisions of 90.02% and 89.90%, detection speeds of 48.2 FPS and 41.5 FPS, and recalls of 97.9% and 97.3% respectively. The average precision and recall are the best among the compared methods, and the detection speed is second only to PPYOLOE-R.
-
Flame Image Enhancement with Few Samples Based on Style Weight Modulation Technique
李明杰, 胡羿, 易正明. 基于样式权重调制技术的少样本火焰图像增强[J]. 计算机科学, 2025, 52(6A): 240500129-7.
LI Mingjie, HU Yi, YI Zhengming. Flame Image Enhancement with Few Samples Based on Style Weight Modulation Technique[J]. Computer Science, 2025, 52(6A): 240500129-7. - LI Mingjie, HU Yi, YI Zhengming
- Computer Science. 2025, 52 (6A): 240500129-7. doi:10.11896/jsjkx.240500129
-
Abstract
PDF(4512KB) ( 37 )
- References | Related Articles | Metrics
-
Few-shot generation relies on only scarce target samples to generate images that are both realistic and diverse, which can build reliable datasets for downstream target recognition tasks. In this paper, we propose a few-shot generation model based on weight modulation, which can produce images with the same content as the target samples but diverse feature representations from only three input target images. Specifically, we carefully design the encoder and decoder in the generator, using a C2F structure with better gradient flow to build a pyramid network architecture that restores the original features of the image at different levels as faithfully as possible. We adopt an attention-based feature fusion method and introduce feature style latent codes to control the quality of feature fusion. The style latent codes use a weight scaling strategy, effectively eliminating generation artifacts and making the generated images more realistic. At the same time, we use an optimized feature length detection algorithm to measure the proximity of important information in the source and target domains, which enables the model to better transfer the prior information obtained through pre-training in the source domain to the target domain. For the task of generating flame image samples, we provide qualitative and quantitative comparison results. The proposed model effectively improves flame target recognition performance under the YOLOv8 algorithm and substantially enhances the data augmentation effect.
-
Study on Interpretable Shallow Class Activation Mapping Algorithm Based on Spatial Weights and Inter-layer Correlation
程艳, 何慧娟, 陈彦滢, 姚楠楠, 林国波. 基于空间权重和层间相关性的可解释浅层类激活映射算法研究[J]. 计算机科学, 2025, 52(6A): 240500140-7.
CHENG Yan, HE Huijuan, CHEN Yanying, YAO Nannan, LIN Guobo. Study on Interpretable Shallow Class Activation Mapping Algorithm Based on Spatial Weights and Inter-layer Correlation[J]. Computer Science, 2025, 52(6A): 240500140-7. - CHENG Yan, HE Huijuan, CHEN Yanying, YAO Nannan, LIN Guobo
- Computer Science. 2025, 52 (6A): 240500140-7. doi:10.11896/jsjkx.240500140
-
Abstract
PDF(3996KB) ( 40 )
- References | Related Articles | Metrics
-
Convolutional neural networks play an important role in computer vision, but their black-box nature makes it difficult for people to understand the reasons for their decisions, seriously hindering their application in certain security-critical areas. Traditional class activation mapping (CAM) algorithms are often limited by the interpretability of deep neurons, so the explanations they produce for shallow neurons are weaker and contain significant noise. To address this challenge, we propose an interpretable shallow class activation mapping algorithm that generates fine-grained explanations. Based on the theory of relevance propagation, the algorithm considers the correlation between adjacent layers to obtain inter-layer correlation weights, uses the spatially weighted feature map as a mask, and multiplies it with the inter-layer correlation weights to achieve shallow-layer interpretation. Experimental results show that, compared with LayerCAM, the method that currently explains shallow layers best, the proposed algorithm improves the combined deletion and insertion test score of the class activation maps generated at each layer of the convolutional neural network by at most 2.73 and at least 0.24 on the ILSVRC2012 val dataset, and by at most 1.31 and at least 0.38 on the CUB-200-2011 dataset.
-
Small Target Detection Algorithm in UAV Images Integrating Multi-scale Features
黄红, 苏菡, 闵鹏. 融合多尺度特征的无人机图像中小目标检测算法[J]. 计算机科学, 2025, 52(6A): 240700097-5.
HUANG Hong, SU Han, MIN Peng. Small Target Detection Algorithm in UAV Images Integrating Multi-scale Features[J]. Computer Science, 2025, 52(6A): 240700097-5. - HUANG Hong, SU Han, MIN Peng
- Computer Science. 2025, 52 (6A): 240700097-5. doi:10.11896/jsjkx.240700097
-
Abstract
PDF(2856KB) ( 41 )
- References | Related Articles | Metrics
-
To address the missed and false detections caused by the dense distribution and mutual occlusion of small objects in drone aerial imagery, a lightweight object detection method with multi-scale feature fusion is proposed. Firstly, a multi-scale occlusion module is introduced to enhance the network's multi-scale information extraction capability, reduce semantic differences between scales, and improve detection performance for occluded small objects. Secondly, a more efficient shared detection head strategy is proposed, which shares feature information of different scales through shared convolutions across the detection heads, significantly reducing the model's parameter count and achieving a lightweight model. Finally, a softened non-maximum suppression (NMS) method is introduced to solve the missed and false detections that traditional greedy NMS causes in dense occlusion scenarios, further improving detection accuracy. The improved model is evaluated on the Visdrone-2019 and RSOD datasets: its mean average precision (mAP) increases by 9.0% and 6.0% respectively compared with the baseline model, while the model parameters are reduced by 12.6%. Experimental results show that the improved algorithm enhances detection accuracy for drone aerial imagery while remaining lightweight, helping drone systems identify and track targets more accurately and improving the reliability and efficiency of task execution.
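The softened NMS mentioned here follows the general soft-NMS idea of decaying, rather than discarding, overlapping detections; the sketch below is the standard Gaussian-decay formulation and may differ in detail from the paper's variant:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS. boxes: (N, 4) as x1, y1, x2, y2; scores: (N,)."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    idx = np.arange(len(scores))
    while idx.size > 0:
        top = idx[np.argmax(scores[idx])]
        keep.append(top)
        idx = idx[idx != top]
        if idx.size == 0:
            break
        # IoU between the kept box and the remaining boxes
        x1 = np.maximum(boxes[top, 0], boxes[idx, 0])
        y1 = np.maximum(boxes[top, 1], boxes[idx, 1])
        x2 = np.minimum(boxes[top, 2], boxes[idx, 2])
        y2 = np.minimum(boxes[top, 3], boxes[idx, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[top]]) + area(boxes[idx]) - inter)
        # Decay overlapping scores instead of hard suppression, then drop
        # detections whose score has fallen below the threshold.
        scores[idx] *= np.exp(-(iou ** 2) / sigma)
        idx = idx[scores[idx] > score_thresh]
    return keep  # indices of retained detections
```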
-
Optimized Volume Rendering with Multi-resolution Chebyshev Distance Maps
梁志文, 巴图斯仁, 冯雪. 使用多分辨率切比雪夫距离图优化体渲染[J]. 计算机科学, 2025, 52(6A): 240700148-7.
LIANG Zhiwen, BATU Siren, FENG Xue. Optimized Volume Rendering with Multi-resolution Chebyshev Distance Maps[J]. Computer Science, 2025, 52(6A): 240700148-7. - LIANG Zhiwen, BATU Siren, FENG Xue
- Computer Science. 2025, 52 (6A): 240700148-7. doi:10.11896/jsjkx.240700148
-
Abstract
PDF(2506KB) ( 38 )
- References | Related Articles | Metrics
-
Volume rendering has numerous applications in medical visualization. However, its computational complexity is significantly higher than that of surface rendering, making real-time rendering challenging. A new empty space skipping method is proposed to accelerate volume rendering and reduce the memory occupied during rendering. This method builds upon the state-of-the-art empty space skipping method (Chebyshev distance empty space skipping). It enhances the ray casting process by skipping empty blocks using a low-resolution distance map and skipping smaller empty blocks inside effective blocks using a high-resolution distance map. The high-resolution distance map is stored only within effective blocks, effectively mitigating the extra memory overhead of introducing the high-resolution map. Moreover, the space savings are more pronounced for sparser volume data. The performance metrics of the new method are compared with Chebyshev distance empty space skipping, and the results indicate that the proposed method retains the advantages of Chebyshev distance map acceleration while delivering significant improvements in rendering frame rate and memory usage when rendering sparse volume data.
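A simplified CPU sketch of the underlying single-resolution skipping step (the multi-resolution and per-block storage details of the proposed method, as well as GPU specifics, are omitted; function and parameter names are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def chebyshev_distance_map(volume, threshold=0.0):
    """Chebyshev distance (in voxels) from each empty voxel to the nearest
    non-empty voxel; 0 inside occupied regions."""
    empty = volume <= threshold
    return distance_transform_cdt(empty, metric="chessboard")

def march(volume, dist_map, origin, direction, step=1.0, max_steps=2048):
    """Skip empty space along a ray; return the first occupied voxel or None."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        idx = tuple(np.floor(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, volume.shape)):
            return None                      # ray left the volume
        skip = dist_map[idx]
        if skip == 0:
            return idx                       # hit a non-empty voxel
        pos += d * max(skip, 1) * step       # safe jump of `skip` voxels
    return None
```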
-
Sand Dust Image Enhancement Method Based on Multi-cascaded Attention Interaction
王荣, 邹淑平, 郝鹏飞, 郭佳伟, 舒鹏. 多级联注意力交互的沙尘图像增强方法[J]. 计算机科学, 2025, 52(6A): 240800048-7.
WANG Rong, ZOU Shuping, HAO Pengfei, GUO Jiawei, SHU Peng. Sand Dust Image Enhancement Method Based on Multi-cascaded Attention Interaction[J]. Computer Science, 2025, 52(6A): 240800048-7. - WANG Rong, ZOU Shuping, HAO Pengfei, GUO Jiawei, SHU Peng
- Computer Science. 2025, 52 (6A): 240800048-7. doi:10.11896/jsjkx.240800048
-
Abstract
PDF(3878KB) ( 49 )
- References | Related Articles | Metrics
-
Due to the scattering and absorption of light by suspended particles, images captured in sand and dust conditions often suffer from a yellowish color cast and low contrast. Existing enhancement algorithms for degraded images mostly target dehazing and deraining, so they struggle to process sand dust images effectively, leaving substantial room for improvement in this area. The lack of large-scale datasets further complicates the task, hindering the ability of neural networks to robustly enhance sand dust images. To address this problem, a sand dust image enhancement method based on multi-cascaded attention interaction is proposed. In addition, a new sand dust image dataset is constructed by combining the atmospheric scattering model with depth information. The method extracts multi-scale feature maps through an end-to-end U-Net model, fuses them using a multi-level channel attention interaction module, and enhances and restores detail information using a multi-scale convolution module. Experimental results show that the proposed method can effectively remove sand and dust from images and restore details, achieving the best performance in terms of PSNR, SSIM, and LPIPS on the proposed dataset.
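The dataset-construction step relies on the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) with t(x) = exp(-beta*d(x)); a toy synthesis sketch is shown below, where the yellowish airlight color and beta value are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def synthesize_sand_dust(clean, depth, beta=1.2,
                         airlight=(0.85, 0.70, 0.45)):
    """Degrade a clean RGB image with the atmospheric scattering model.

    clean : (H, W, 3) float image in [0, 1]
    depth : (H, W) relative scene depth in [0, 1], larger = farther
    I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
    """
    t = np.exp(-beta * depth)[..., None]          # transmission map
    A = np.asarray(airlight).reshape(1, 1, 3)     # assumed yellowish airlight
    return clean * t + A * (1.0 - t)

clean = np.random.rand(120, 160, 3)
depth = np.tile(np.linspace(0, 1, 160), (120, 1))
dusty = synthesize_sand_dust(clean, depth)        # synthetic training pair
```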
-
Pedestrian Re-identification Based on Spatial Transformation and Multi-scale Feature Fusion
金鹭, 刘敏昆, 张春红, 陈可飞, 罗压琼, 李博. 基于空间转换与多尺度特征融合的行人重识别方法[J]. 计算机科学, 2025, 52(6A): 240800156-7.
JIN Lu, LIU Mingkun, ZHANG Chunhong, CHEN Kefei, LUO Yaqiong, LI Bo. Pedestrian Re-identification Based on Spatial Transformation and Multi-scale Feature Fusion[J]. Computer Science, 2025, 52(6A): 240800156-7. - JIN Lu, LIU Mingkun, ZHANG Chunhong, CHEN Kefei, LUO Yaqiong, LI Bo
- Computer Science. 2025, 52 (6A): 240800156-7. doi:10.11896/jsjkx.240800156
-
Abstract
PDF(3866KB) ( 39 )
- References | Related Articles | Metrics
-
A network combining spatial transformation and multi-scale feature fusion is designed to address the insufficient representation of pedestrian information caused by spatial misalignment of pedestrian characteristics and occlusion. Firstly, a pedestrian retrieval enhancement method is proposed to improve the network's ability to recognize special samples. Secondly, a self-attention spatial transformation network is introduced to address the inconsistent spatial semantic information across pedestrian image regions. Then, features of different scales are extracted from the network and fused separately according to the characteristics of each branch, incorporating coordinate attention and instance-batch normalization. Finally, the features of the different branches are fused to obtain highly representative fused features. Experiments on multiple datasets show that the proposed method outperforms existing methods in re-identification performance.
-
Object Detection-based Method for Guiding Passenger Flow in Boarding and Deparking Areas of Rail Transit
乐凌志, 翟江涛, 俞铭, 孙同庆. 一种基于目标检测的轨道交通上下客区客流指引方法[J]. 计算机科学, 2025, 52(6A): 240400192-9.
LE Lingzhi, ZHAI Jiangtao, YU Ming, SUN Tongqing. Object Detection-based Method for Guiding Passenger Flow in Boarding and Deparking Areas of Rail Transit[J]. Computer Science, 2025, 52(6A): 240400192-9. - LE Lingzhi, ZHAI Jiangtao, YU Ming, SUN Tongqing
- Computer Science. 2025, 52 (6A): 240400192-9. doi:10.11896/jsjkx.240400192
-
Abstract
PDF(3145KB) ( 32 )
- References | Related Articles | Metrics
-
To address the situation where waiting passengers occupy the alighting area in front of the platform screen doors, this paper proposes a passenger flow guidance method based on object detection. Firstly, an improved MCA-YOLOv5s network model that enhances the shape features of passengers in front of the platform screen doors is proposed for object detection. Then the field of view and installation angle of the camera are calculated from the mounting height of the intelligent door lintel system and the extent of the alighting area in real scenes, ensuring accurate division of the alighting and boarding areas in captured images. Subsequently, passenger density is estimated for the alighting and boarding areas, corresponding passenger flow distribution strategies are designed based on the estimated density values, and guidance is provided through speakers on the intelligent door lintel terminal. Tests in real scenarios validate that the method estimates passenger density rapidly and accurately.
-
Water Segmentation Contour Post-processing Algorithm for New Energy Photovoltaic Power Station Location
武星明, 党旗, 江波, 张悦超, 周继威, 王晓龙. 用于新能源光伏电站选址的水体分割轮廓后处理算法[J]. 计算机科学, 2025, 52(6A): 240700035-10.
WU Xingming, DANG Qi, JIANG Bo, ZHANG Yuechao, ZHOU Jiwei, WANG Xiaolong. Water Segmentation Contour Post-processing Algorithm for New Energy Photovoltaic Power Station Location[J]. Computer Science, 2025, 52(6A): 240700035-10. - WU Xingming, DANG Qi, JIANG Bo, ZHANG Yuechao, ZHOU Jiwei, WANG Xiaolong
- Computer Science. 2025, 52 (6A): 240700035-10. doi:10.11896/jsjkx.240700035
-
Abstract
PDF(6138KB) ( 37 )
- References | Related Articles | Metrics
-
In the site selection process for new energy photovoltaic power stations, analyzing the distribution of water areas from unmanned aerial vehicle (UAV) images is an indispensable step. Water bodies in such images are usually extracted by water segmentation algorithms. However, for semantic segmentation neural networks, improving the model structure or enlarging a single-scene training dataset only improves performance on the corresponding scene, and it is difficult to guarantee accurate water boundary segmentation in open scenes. To solve this problem, two contour post-processing algorithms for water segmentation networks are proposed. Compared with the state of the art, the continuous contour post-processing algorithm effectively removes, according to contour features, the abnormal small contours generated by the water segmentation algorithm. For the intermittent water boundaries produced on complex images, the intermittent contour processing algorithm completes the water boundary through point set rearrangement. Both post-processing algorithms improve segmentation accuracy. PIDNet, EGE-UNet, BiSeNetv2, and Fast-SCNN are used as experimental models. The results show that the pixel accuracy (PA) and mean intersection over union (mIoU) of the experimental models improve on the water boundary detection task after continuous contour processing, with average gains of 2.85% and 2.71% respectively, and the average F1 score increases by 2.62% after intermittent contour processing.
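A minimal sketch of the small-contour removal idea in the continuous contour post-processing (the area threshold is an assumed parameter, and the point-set rearrangement used for intermittent boundaries is not shown):

```python
import cv2
import numpy as np

def remove_small_contours(mask, min_area_ratio=0.001):
    """Drop water contours whose area is below a fraction of the image.

    mask: uint8 binary water mask (255 = water). Returns a cleaned mask.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    min_area = min_area_ratio * mask.shape[0] * mask.shape[1]
    cleaned = np.zeros_like(mask)
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            # keep only contours large enough to be real water bodies
            cv2.drawContours(cleaned, [c], -1, 255, thickness=cv2.FILLED)
    return cleaned
```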
-
Continuous Sign Language Recognition Based on Graph Convolutional Network and CTC/Attention
边辉, 孟畅乾, 李子涵, 陈子豪, 谢雪雷. 基于图卷积网络和CTC/Attention的连续手语识别[J]. 计算机科学, 2025, 52(6A): 240400098-9.
BIAN Hui, MENG Changqian, LI Zihan, CHEN Zihao and XIE Xuelei. Continuous Sign Language Recognition Based on Graph Convolutional Network and CTC/Attention[J]. Computer Science, 2025, 52(6A): 240400098-9. - BIAN Hui, MENG Changqian, LI Zihan, CHEN Zihao and XIE Xuelei
- Computer Science. 2025, 52 (6A): 240400098-9. doi:10.11896/jsjkx.240400098
-
Abstract
PDF(3890KB) ( 38 )
- References | Related Articles | Metrics
-
Sign language is an important means of communication for people with hearing impairment; through sign language recognition, they can communicate with others without barriers. With the development of deep learning, various sign language recognition technologies have emerged, but existing ones often cannot handle continuous sign language recognition. Therefore, this paper proposes a continuous sign language recognition method based on a graph convolutional network (GCN) and connectionist temporal classification/attention (CTC/Attention), which extracts features along the spatial and temporal dimensions respectively. A spatial attention mechanism is incorporated to assign weights to skeletal keypoints, highlighting effective spatial characteristics and enabling continuous sign language recognition. The method realizes sequence alignment and contextual semantic modeling for continuous sign language sentence translation. Firstly, sign language skeletal keypoint data are collected with the MediaPipe framework, and a Chinese sign language skeletal keypoint dataset is built on this basis. A dynamic sign word recognition method based on the spatio-temporal graph convolutional network (ST-GCN) is then designed. Finally, a method based on GCN and a CTC/Attention decoding network is proposed to realize continuous sign language sentence recognition. With limited data, the proposed method is evaluated on the self-built skeletal keypoint dataset SSLD. Experimental results show that the average continuous sign language recognition accuracy reaches 94.41%, demonstrating that the model has good sign language recognition capability.
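A hedged sketch of the keypoint collection step with MediaPipe Holistic (the choice of pose plus both hands, zero-filling for missing detections, and the array layout are assumptions about the dataset construction, not the authors' exact pipeline):

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic

def extract_keypoints(video_path):
    """Return a (T, K, 3) array of per-frame pose + hand landmarks."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, bgr = cap.read()
            if not ok:
                break
            res = holistic.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
            pts = []
            for lm_set, n in ((res.pose_landmarks, 33),
                              (res.left_hand_landmarks, 21),
                              (res.right_hand_landmarks, 21)):
                if lm_set is None:
                    pts.extend([(0.0, 0.0, 0.0)] * n)   # missing -> zeros
                else:
                    pts.extend((p.x, p.y, p.z) for p in lm_set.landmark)
            frames.append(pts)
    cap.release()
    return np.asarray(frames, dtype=np.float32)   # input to the ST-GCN
```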
-
FLIP-based Joint Similarity Preserving Hashing for Cross-modal Retrieval
唐立军, 杨政, 赵男, 翟苏巍. 基于FLIP与联合相似性保持的跨模态哈希检索[J]. 计算机科学, 2025, 52(6A): 240400151-10.
TANG Lijun, YANG Zheng, ZHAO Nan, ZHAI Suwei. FLIP-based Joint Similarity Preserving Hashing for Cross-modal Retrieval[J]. Computer Science, 2025, 52(6A): 240400151-10. - TANG Lijun, YANG Zheng, ZHAO Nan, ZHAI Suwei
- Computer Science. 2025, 52 (6A): 240400151-10. doi:10.11896/jsjkx.240400151
-
Abstract
PDF(6671KB) ( 52 )
- References | Related Articles | Metrics
-
Recently, supervised cross-modal retrieval techniques have garnered significant attention. Based on sample-level semantic relationships, existing methods primarily focus on assessing sample-wise similarity while neglecting the potential impact of label distribution on retrieval performance. Furthermore, existing approaches still face challenges of inaccurate feature extraction and sluggish processing. To address these problems, we introduce a new method, termed FLIP-based joint similarity preserving hashing (FJSPH), for cross-modal retrieval. Specifically, we leverage the fast language-image pre-training model (FLIP) to extract more accurate cross-modal features. To further reduce cross-modal semantic differences, we enhance modal interaction and refine modal semantic representation through multimodal contrastive learning. In addition, we use sample-wise similarity and cluster-wise similarity to further exploit the semantic correlation between different modalities. This ensures that samples sharing similar semantics are positioned closer together in Hamming space, producing more discriminative hash codes. Experimental results on three cross-modal datasets indicate that FJSPH achieves excellent cross-modal retrieval performance.
-
Automation and Security Strategies and Empirical Research on Operation and Maintenance of Digital Government Database
王昀, 赵剑明, 郭毅峰, 周欢欢, 周武爱, 张皖哲, 冯建华. 数字政府数据库运维的自动化与安全策略及实证研究[J]. 计算机科学, 2025, 52(6A): 240500045-8.
WANG Yun, ZHAO Jianming, GUO Yifeng, ZHOU Huanhuan, ZHOU Wuai, ZHANG Wanzhe, FENG Jianhua. Automation and Security Strategies and Empirical Research on Operation and Maintenance of Digital Government Database[J]. Computer Science, 2025, 52(6A): 240500045-8. - WANG Yun, ZHAO Jianming, GUO Yifeng, ZHOU Huanhuan, ZHOU Wuai, ZHANG Wanzhe, FENG Jianhua
- Computer Science. 2025, 52 (6A): 240500045-8. doi:10.11896/jsjkx.240500045
-
Abstract
PDF(2484KB) ( 52 )
- References | Related Articles | Metrics
-
With the continuous deepening of government digital transformation, the construction of digital government has entered a new period of rapid growth. However, it also brings many challenges and problems, especially in database operation and maintenance, such as the large number of databases, the variety of database types, insufficient data security standards, frequent network attacks, high operation and maintenance costs, and difficult performance optimization. This paper proposes a new strategy framework based on automation technology and security optimization to cope with this situation, and carries out empirical research on the related theories. The framework integrates automation, cloud computing, and artificial intelligence technologies, providing a comprehensive solution that includes automatic inspection and monitoring, automatic tuning, data backup and recovery, high-availability management, automatic error repair, security optimization management, performance and capacity management, and SQL audit management. In addition, empirical research conducted in digital government projects in Gansu and Heilongjiang provinces shows that the framework can effectively improve operation and maintenance efficiency, accumulating valuable experience in the operation and maintenance of digital government databases.
-
Research on Fusion Optimization Method of Heterogeneous Data Dictionary in Grass-roots Social Grid Governance
王庆, 杨万哲, 张聪. 基层社会网格治理异构数据字典融合优化方法研究[J]. 计算机科学, 2025, 52(6A): 240400074-7.
WANG Qing, YANG Wanzhe, ZHANG Cong. Research on Fusion Optimization Method of Heterogeneous Data Dictionary in Grass-roots Social Grid Governance[J]. Computer Science, 2025, 52(6A): 240400074-7. - WANG Qing, YANG Wanzhe, ZHANG Cong
- Computer Science. 2025, 52 (6A): 240400074-7. doi:10.11896/jsjkx.240400074
-
Abstract
PDF(2563KB) ( 47 )
- References | Related Articles | Metrics
-
A data dictionary (DD) is an important part of database system design; it is a collection of data lists that describes the attributes, composition, and structure of the data in a database. When developing general-purpose information systems, designers and developers often face the problem of how to integrate and optimize existing heterogeneous data dictionaries. Due to the lack of industry data standards or to business scope limitations, these existing data dictionaries differ significantly in data representation, data composition, and structural design, yet their data content overlaps substantially and can be fused. Maintaining a fused data dictionary manually takes a lot of time and resources. Against the business background of grass-roots social grid governance, this paper targets the pain points of heterogeneous data dictionary fusion in developing digital applications for grass-roots social governance, and studies optimization methods and related technologies for heterogeneous data dictionary fusion. Methods and techniques for data dictionary fusion are designed that consider both the completeness of data information and the integrity of data structure, including semantic deduplication and disambiguation, keyword extraction, similarity calculation, and table structure fusion. Experimental verification on the data dictionary fusion of grass-roots social grid governance business shows that the fusion efficiency and effect are significantly improved compared with traditional data dictionary fusion methods.
-
Study on Improvements of RippleNet Model Based on Representation Enhancement
李鹏彦, 王宝会, 叶子豪. 基于表示增强的RippleNet模型改进研究[J]. 计算机科学, 2025, 52(6A): 240800142-9.
LI Pengyan, WANG Baohui, YE Zihao. Study on Improvements of RippleNet Model Based on Representation Enhancement[J]. Computer Science, 2025, 52(6A): 240800142-9. - LI Pengyan, WANG Baohui, YE Zihao
- Computer Science. 2025, 52 (6A): 240800142-9. doi:10.11896/jsjkx.240800142
-
Abstract
PDF(2342KB) ( 41 )
- References | Related Articles | Metrics
-
As the volume of internet information grows exponentially, recommender systems play a crucial role in addressing information overload. In response to the deficiencies of entity and relation representations in existing recommender systems, this paper proposes an enhanced model termed representation-enhanced RippleNet (RE-RippleNet). On one hand, traditional models tend to overlook the semantic information inherent in relationships; by aggregating neighboring entities and relationships into the embedded representation of entities, the expressive power of entity embeddings and the accuracy of user representations are improved. On the other hand, when aggregating multi-hop ripple sets to propagate user preferences, a long short-term memory (LSTM) network is employed to capture the distinct influences and characteristics of user preference representations across different hops, enabling a deeper exploration of user preferences and more precise recommendations. Click-through rate prediction experiments on two public datasets, MovieLens-1M and Book-Crossing, demonstrate that RE-RippleNet achieves significant improvements in accuracy (ACC) and AUC over the baseline RippleNet model: ACC and AUC increase by 1.7% and 1.2% respectively on MovieLens-1M, and by 3.6% and 1.6% on Book-Crossing, validating the model's effectiveness in enhancing recommender system performance.
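A minimal PyTorch sketch of the LSTM-over-hops idea: each hop's aggregated ripple-set vector is treated as one time step and the final hidden state serves as the user representation; dimensions and the downstream scoring are illustrative, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class HopLSTMAggregator(nn.Module):
    """Fuse per-hop user preference vectors with an LSTM."""
    def __init__(self, dim=16, hops=3):
        super().__init__()
        self.hops = hops
        self.lstm = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)

    def forward(self, hop_reprs):
        # hop_reprs: (batch, hops, dim); hop h is the aggregated ripple-set
        # representation at propagation depth h (already entity/relation aware)
        _, (h_n, _) = self.lstm(hop_reprs)
        return h_n[-1]                        # (batch, dim) user vector

agg = HopLSTMAggregator(dim=16, hops=3)
user_vec = agg(torch.randn(8, 3, 16))        # toy batch of 8 users
item_emb = torch.randn(8, 16)
score = torch.sigmoid((user_vec * item_emb).sum(-1))  # CTR prediction
```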
-
Study on Trajectory Prediction Model Algorithm of Missing Submersible Based on Differential Equation
杨镇宇, 戢晓峰, 马武彬, 吴亚辉. 基于微分方程的失联潜水器轨迹预测模型算法研究[J]. 计算机科学, 2025, 52(6A): 240900071-9.
YANG Zhenyu, JI Xiaofeng, MA Wubin, WU Yahui. Study on Trajectory Prediction Model Algorithm of Missing Submersible Based on Differential Equation[J]. Computer Science, 2025, 52(6A): 240900071-9. - YANG Zhenyu, JI Xiaofeng, MA Wubin, WU Yahui
- Computer Science. 2025, 52 (6A): 240900071-9. doi:10.11896/jsjkx.240900071
-
Abstract
PDF(8127KB) ( 45 )
- References | Related Articles | Metrics
-
A deep-sea exploration submersible faces a severe test after failure and loss of contact, so it is very important to predict its location and rescue it in time. However, a deep-sea submersible is affected by many uncertain factors such as ocean currents, seawater salinity, and seabed topography, which makes accurate position prediction very difficult. Traditional single-factor analysis methods have serious defects for this problem and can hardly describe the motion of a submersible in a complex marine environment. This paper carries out a coupling analysis of the uncertain factors in the deep-sea environment, establishes a multi-factor coupled motion model of the missing submersible based on differential equations, and classifies, simulates, and visualizes the motion trajectories of the submersible under the influence of eight kinds of terrain and other factors, providing decision support for locating and searching for deep-sea exploration submersibles. Experimental results show that the model can determine the position of the submersible in a complex marine environment with high accuracy, outperforming other baseline algorithms.
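Purely as an illustration of a differential-equation drift model (not the paper's multi-factor model), a submersible can be treated as drifting toward the local current while sinking at a constant rate; the current field, drag coefficient, and sink rate below are assumed values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def current(t, x, y):
    """Assumed horizontal ocean-current field (m/s) at position (x, y)."""
    return np.array([0.30 + 0.05 * np.sin(1e-4 * t), 0.10 * np.cos(1e-4 * x)])

def dynamics(t, state, drag=5e-4, sink_rate=0.05):
    """state = [x, y, z, vx, vy]; velocity relaxes toward the local current,
    depth z decreases at a constant sink rate until an assumed seabed depth."""
    x, y, z, vx, vy = state
    cx, cy = current(t, x, y)
    ax, ay = drag * (cx - vx), drag * (cy - vy)
    dz = -sink_rate if z > -3000.0 else 0.0
    return [vx, vy, dz, ax, ay]

sol = solve_ivp(dynamics, t_span=(0, 6 * 3600), y0=[0, 0, -200, 0.2, 0.0],
                max_step=60.0)
x_t, y_t, z_t = sol.y[0], sol.y[1], sol.y[2]   # predicted drift trajectory
```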
-
Study on Performance Analysis Methods for Page Faults Based on System Tracing and Pattern Mining
魏书文, 王宝会. 基于系统追踪与模式挖掘的缺页异常性能分析方法研究[J]. 计算机科学, 2025, 52(6A): 240800023-6.
WEI Shuwen, WANG Baohui. Study on Performance Analysis Methods for Page Faults Based on System Tracing and Pattern Mining[J]. Computer Science, 2025, 52(6A): 240800023-6. - WEI Shuwen, WANG Baohui
- Computer Science. 2025, 52 (6A): 240800023-6. doi:10.11896/jsjkx.240800023
-
Abstract
PDF(2774KB) ( 38 )
- References | Related Articles | Metrics
-
Page fault handling is a core module of the memory management subsystem in the Linux kernel. For memory-intensive applications such as databases, the efficiency of this process directly impacts overall system performance and often becomes a performance bottleneck. There has been extensive research on optimizing memory usage and reducing the occurrence of page faults, but the handling patterns of page faults and their latency distribution in real-world environments have received little attention. This paper proposes a method based on execution tracing to analyze the handling patterns and latency distribution of page faults. With this method, the handling patterns and latency distributions of page faults under two classic memory-intensive workloads are studied, providing important references for optimizing memory-intensive applications.
-
Prediction of Influenza A Antigenicity Based on Few-shot Contrastive Learning
李江辉, 丁海燕, 李维华. 基于小样本对比学习的甲型流感抗原性预测[J]. 计算机科学, 2025, 52(6A): 240800053-6.
LI Jianghui, DING Haiyan, LI Weihua. Prediction of Influenza A Antigenicity Based on Few-shot Contrastive Learning[J]. Computer Science, 2025, 52(6A): 240800053-6. - LI Jianghui, DING Haiyan, LI Weihua
- Computer Science. 2025, 52 (6A): 240800053-6. doi:10.11896/jsjkx.240800053
-
Abstract
PDF(3017KB) ( 38 )
- References | Related Articles | Metrics
-
Influenza viruses undergo a series of genetic mutations under selective pressure,leading to antigenic variation,immune evasion,and enhanced adaptability,which reduces the effectiveness of existing vaccines and antiviral drugs. Timely identification of antigenic differences between viral strains is crucial to the prevention and control of influenza viruses and the development of vaccines. Due to the low throughput of traditional serological methods,the available data samples are often limited,making it difficult for existing deep learning-based antigenicity prediction models to effectively extract antigenic features from hemagglutinin protein sequences. Therefore,this paper proposes an antigenicity prediction method enhanced by convolutional neural networks and contrastive learning. By comparing the genetic sequences and antigenicity labels of original strains,the model directly extracts antigenic representation differences and visualizes these antigenic differences. Experiments are conducted on datasets of three subtypes:A/H1N1,A/H3N2,and A/H5N1. The results show that the proposed model improves the accuracy and generalization ability of antigenicity prediction,providing support for the monitoring of influenza viruses and vaccine development.
-
Research and Implementation of Mine Gas Concentration Prediction Algorithm Based on Deep Learning
王宝会, 高瞻, 徐林, 谭英洁. 基于深度学习的矿井瓦斯浓度预测算法研究与实现[J]. 计算机科学, 2025, 52(6A): 240400188-7.
WANG Baohui, GAO Zhan, XU Lin, TAN Yingjie. Research and Implementation of Mine Gas Concentration Prediction Algorithm Based on Deep Learning[J]. Computer Science, 2025, 52(6A): 240400188-7. - WANG Baohui, GAO Zhan, XU Lin, TAN Yingjie
- Computer Science. 2025, 52 (6A): 240400188-7. doi:10.11896/jsjkx.240400188
-
Abstract
PDF(2565KB) ( 39 )
- References | Related Articles | Metrics
-
Currently, traditional gas concentration prediction algorithms at home and abroad primarily rely on ARIMA and SVM models. With the rapid development of deep learning and the rise of neural networks, the latest gas concentration prediction is performed with recurrent neural network (RNN) models. Owing to their nonlinear characteristics and their consideration of dependencies within the data, RNNs further improve prediction performance over traditional algorithms. However, as the length of the sample sequence increases, their prediction ability decreases due to inherent flaws in the model. To address this issue, this paper proposes a novel gas concentration prediction model that combines convolutional neural networks (CNNs) with RNNs and incorporates an attention mechanism to enhance the expressive power across the data. Tests on real data from the 1209 working face of Zhongxing Coal Industry in Shanxi Fenxi Mining Group show that the average relative error of the traditional RNN model is 0.421, while that of the proposed model is 0.0293. The experiments demonstrate that the proposed algorithm achieves better prediction performance than traditional gas concentration prediction algorithms.
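A hedged PyTorch sketch of a CNN + recurrent network + attention predictor of the kind described (the window length, the use of a GRU in place of the unspecified recurrent cell, and the additive attention over time steps are all assumptions):

```python
import torch
import torch.nn as nn

class GasPredictor(nn.Module):
    """1-D CNN front end, GRU over time, attention pooling, scalar output."""
    def __init__(self, in_feats=1, conv_ch=16, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_feats, conv_ch, kernel_size=3, padding=1),
            nn.ReLU())
        self.rnn = nn.GRU(conv_ch, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, in_feats)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (B, T, conv_ch)
        out, _ = self.rnn(h)                              # (B, T, hidden)
        w = torch.softmax(self.attn(out), dim=1)          # attention weights
        ctx = (w * out).sum(dim=1)                        # weighted context
        return self.head(ctx).squeeze(-1)                 # next concentration

model = GasPredictor()
pred = model(torch.randn(4, 60, 1))    # 4 windows of 60 past readings
```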
-
Ensemble Learning Model for Stock Manipulation Detection Based on Multi-scale Data
刘成明, 李海霞, 李韶川, 李英豪. 基于多尺度数据的股票操纵检测集成模型[J]. 计算机科学, 2025, 52(6A): 240700108-8.
LIU Chengming, LI Haixia, LI Shaochuan, LI Yinghao. Ensemble Learning Model for Stock Manipulation Detection Based on Multi-scale Data[J]. Computer Science, 2025, 52(6A): 240700108-8. - LIU Chengming, LI Haixia, LI Shaochuan, LI Yinghao
- Computer Science. 2025, 52 (6A): 240700108-8. doi:10.11896/jsjkx.240700108
-
Abstract
PDF(1806KB) ( 37 )
- References | Related Articles | Metrics
-
The stock market is a significant part of China's financial system, and its stability is crucial for overall financial stability. Stock price manipulation has long been a topic of widespread concern. Existing research on stock price manipulation detection models is often based on either daily or intraday trading data. However, stock manipulation can have both short-term and long-term impacts, and a single temporal scale may not comprehensively capture the pattern characteristics of stock manipulation. This paper proposes an ensemble learning model for stock manipulation detection based on multi-scale data. The model ensembles sub-models built on minute-level and day-level trading data, enhancing the capability to identify trade-based stock manipulation behavior. Comparative experiments show that the proposed multi-scale model achieves large improvements in metrics such as AUC, accuracy, recall, and precision.
-
Adaptive Differential Evolution Based on Self-guided Perturbation and Extreme Dimension Exchange
翟雪玉, 杨卫中. 自扰动和极性维度交互的自适应差分进化算法[J]. 计算机科学, 2025, 52(6A): 240800100-14.
ZHAI Xueyu, YANG Weizhong. Adaptive Differential Evolution Based on Self-guided Perturbation and Extreme Dimension Exchange[J]. Computer Science, 2025, 52(6A): 240800100-14. - ZHAI Xueyu, YANG Weizhong
- Computer Science. 2025, 52 (6A): 240800100-14. doi:10.11896/jsjkx.240800100
-
Abstract
PDF(6201KB) ( 37 )
- References | Related Articles | Metrics
-
Aiming at the defects of the differential evolution algorithm, such as loss of population diversity and premature convergence on multimodal complex optimization problems, a differential evolution based on adaptive parameter control and self-guided perturbation (APE-DE) is proposed. First, a self-guided perturbation compensating scheme is designed that uses the individual's spatial position to guide its search direction, effectively avoiding the trap of local optima. Second, the algorithm develops an extreme dimension exchange strategy, which evaluates population diversity from multiple dimensions and applies corresponding diversity enhancement schemes. Finally, the algorithm proposes an adaptive parameter control strategy that combines information from wavelet basis functions and fitness distribution deviations to capture the dynamic changes in population fitness in real time and adjust the algorithm parameters accordingly. To verify the performance of APE-DE, experiments are conducted on the widely used IEEE CEC2017 benchmark set to validate the algorithm's effectiveness in multimodal and complex environments. Experimental results show that, compared with eight advanced differential evolution variants, APE-DE exhibits significant advantages in both convergence accuracy and convergence speed. Furthermore, to evaluate the effectiveness of APE-DE on real-world problems, the proposed algorithm is applied to the parameter identification of photovoltaic models.
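For reference, the classical DE/rand/1/bin scheme that APE-DE builds on is sketched below; the self-guided perturbation, extreme dimension exchange, and adaptive parameter control of the proposed algorithm are not reproduced:

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    """Classical differential evolution (DE/rand/1/bin) minimizing f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

# toy usage on the sphere function
x_best, f_best = de_rand_1_bin(lambda x: np.sum(x ** 2), [(-5, 5)] * 10)
```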
-
Local Linear Embedding Algorithm Based on Probability Model and Information Entropy
刘远红, 毋毓斌. 基于概率模型与信息熵的局部线性嵌入算法[J]. 计算机科学, 2025, 52(6A): 240500021-8.
LIU Yuanhong, WU Yubin. Local Linear Embedding Algorithm Based on Probability Model and Information Entropy[J]. Computer Science, 2025, 52(6A): 240500021-8. - LIU Yuanhong, WU Yubin
- Computer Science. 2025, 52 (6A): 240500021-8. doi:10.11896/jsjkx.240500021
-
Abstract
PDF(3861KB) ( 43 )
- References | Related Articles | Metrics
-
The local linear embedding algorithm uses Euclidean distance to select neighborhood points, which usually loses the nonlinear features of the dataset itself and leads to incorrect neighbor selection, and using only Euclidean distance to construct weights leaves much of the available information unexploited. To address these issues, a local linear embedding algorithm based on a probability model and information entropy (PIE-LLE) is proposed. Firstly, to make neighbor selection more reasonable, the probability distribution of each sample point and its neighborhood is considered from the perspective of the dataset's probability distribution, and a neighborhood set that conforms to the local distribution is constructed for each sample point. Secondly, to fully extract the local structural information of the samples, in the weight construction stage the probability of the neighborhood to which each sample belongs and the information entropy of each sample are calculated separately, and the two kinds of information are fused to reconstruct the low-dimensional samples. Finally, experiments on two bearing fault datasets show that the highest fault identification accuracy reaches 100%, higher than that of the comparison algorithms. Within the range of 5 to 15 neighborhood points, PIE-LLE exhibits good low-dimensional visualization performance. In the parameter sensitivity experiment, the proposed algorithm maintains a relatively large Fisher index, effectively improving classification accuracy and stability.
-
Factor Query Language-Basic Language of Factor Database
孟祥福, 李子函, 史家晟, 郭建威, 赵亮, 郭嗣琮, 汪培庄. 因素查询语言(FQL)-因素数据库的基本语言[J]. 计算机科学, 2025, 52(6A): 240600027-8.
MENG Xiangfu, LI Zihan, SHI Jiasheng, GUO Jianwei, ZHAO Liang, GUO Sicong, WANG Peizhuang. Factor Query Language-Basic Language of Factor Database[J]. Computer Science, 2025, 52(6A): 240600027-8. - MENG Xiangfu, LI Zihan, SHI Jiasheng, GUO Jianwei, ZHAO Liang, GUO Sicong, WANG Peizhuang
- Computer Science. 2025, 52 (6A): 240600027-8. doi:10.11896/jsjkx.240600027
-
Abstract
PDF(3134KB) ( 38 )
- References | Related Articles | Metrics
-
To support the basic storage and efficient processing of factor pedigree data in the context of factor space theory, a factor query language and a factor base management system architecture are proposed. Firstly, the concept of the factor pedigree is introduced and a storage method for factor pedigrees based on the XML specification is provided. Subsequently, the basic operation specifications for the factor query language, covering addition, deletion, modification, and retrieval, are set out. To improve the efficiency of factor queries while enhancing data update speed and reducing the cost of index updates, factor coding strategies based on intervals, prime numbers, and binary strings are further proposed in accordance with the characteristics of the factor pedigree. Finally, the system architecture and functional modules of the factor base management system are designed accordingly, serving as the operational carrier of the factor query language. The factor query language and factor base management system constitute the system platform for implementing factor space theory; this paper makes a preliminary exploration of this topic and provides basic ideas and solutions for the development and application of factor base management systems.
-
Next Point of Interest Recommendation Incorporating Dynamic Social Relationships
蒋昊伦, 朱金侠, 孟祥福. 融合动态社会关系的下一个兴趣点推荐[J]. 计算机科学, 2025, 52(6A): 240600003-7.
JIANG Haolun, ZHU Jinxia, MENG Xiangfu. Next Point of Interest Recommendation Incorporating Dynamic Social Relationships[J]. Computer Science, 2025, 52(6A): 240600003-7. - JIANG Haolun, ZHU Jinxia, MENG Xiangfu
- Computer Science. 2025, 52 (6A): 240600003-7. doi:10.11896/jsjkx.240600003
-
Abstract
PDF(3240KB) ( 39 )
- References | Related Articles | Metrics
-
Existing research on next point-of-interest (POI) recommendation focuses on optimizing the accuracy and practicality of recommendation models by integrating user preferences, sequential behaviors, and spatio-temporal contextual information. However, current recommendation strategies still face two major challenges: 1) the dynamic nature of user interests; 2) the influence of users' social relations on their decision-making. To address these issues, a next POI recommendation model that integrates dynamic social relations is proposed. The model uses a self-attention network to simulate the dynamic changes in user preferences and jointly models sequential information, spatio-temporal information, and dynamic social relations. Additionally, two parallel long-term and short-term channels are designed to capture users' dynamic preferences and context-related dynamic social relations respectively. Through a multi-head self-attention mechanism, it effectively models the long-range dependency between any two historical check-in behaviors of a user, adaptively allocating contribution values to the next POI. Finally, an attention mechanism is employed in the prediction layer to weigh the impact of users' long-term and short-term preferences as well as their inherent interest in a POI on their decision-making. Experiments on real-world public datasets from Gowalla and Brightkite demonstrate that the proposed model outperforms current next POI recommendation algorithms.
-
Click-through Rate Prediction Model Based on Feature Embedding Gating and Polynomial Feature Crossover Networks
栾方军, 张凤强, 袁帅. 基于特征嵌入门控和多项式特征交叉网络的点击率预测模型[J]. 计算机科学, 2025, 52(6A): 240900092-6.
LUAN Fangjun, ZHANG Fengqiang, YUAN Shuai. Click-through Rate Prediction Model Based on Feature Embedding Gating and Polynomial Feature Crossover Networks[J]. Computer Science, 2025, 52(6A): 240900092-6. - LUAN Fangjun, ZHANG Fengqiang, YUAN Shuai
- Computer Science. 2025, 52 (6A): 240900092-6. doi:10.11896/jsjkx.240900092
-
Abstract
PDF(2236KB) ( 55 )
- References | Related Articles | Metrics
-
Click-through rate prediction plays a crucial role in recommender systems and online advertising, and feature embedding and feature interaction are key factors affecting prediction accuracy. However, many existing models focus mainly on designing feature interaction structures, and they usually use simple computations such as the Hadamard product, the inner product, single vector-level or bit-level feature interactions, or a multilayer perceptron for implicit feature interaction, which may be limited when handling complex feature interactions. To make up for these shortcomings, a click-through rate prediction model based on feature embedding gating and polynomial feature crossover networks is proposed. First, to achieve more effective feature interactions, a polynomial feature crossover network is proposed, which realizes explicit higher-order feature crossover in a recursive form by combining the Hadamard product and the inner product. Then, fine-grained feature interaction is achieved by fusing two parallel polynomial feature crossover networks that perform vector-level and bit-level feature crossover. Finally, to dynamically learn the importance of feature embeddings and increase the variability of the inputs to the feature interaction network, feature embedding gating is proposed, which learns feature weights at the vector level and the bit level so that the interaction network can more precisely capture different feature interaction information. The model is evaluated on four open benchmark datasets, achieving AUC and Logloss of 0.8149 and 0.4372 on the Criteo dataset, 0.7663 and 0.3661 on the Avazu dataset, 0.9716 and 0.1984 on the Movielens dataset, and 0.9858 and 0.1387 on the Frappe dataset. The experimental results show that the proposed model exhibits better performance in click-through rate prediction and effectively improves prediction accuracy.
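A hedged PyTorch sketch of one recursive crossover step that mixes a Hadamard-product (bit-level) term with an inner-product (vector-level) term; the gating module, the two parallel branches, and the exact parameterization in the paper are not reproduced:

```python
import torch
import torch.nn as nn

class PolyCrossLayer(nn.Module):
    """One explicit crossover step, CrossNet-style: the Hadamard product of
    x0 with a linear map of x_l gives bit-level interaction, an inner-product
    term adds a vector-level signal, and the residual keeps lower orders, so
    stacking L layers yields crossings of order up to L + 1."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=True)
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x0, xl):
        hadamard = x0 * self.w(xl)                   # bit-level crossover
        inner = (x0 * xl).sum(-1, keepdim=True)      # vector-level crossover
        return hadamard + self.alpha * inner * x0 + xl

layers = nn.ModuleList([PolyCrossLayer(64) for _ in range(3)])
x0 = torch.randn(32, 64)        # concatenated (gated) feature embeddings
x = x0
for layer in layers:
    x = layer(x0, x)            # recursive explicit feature crossover
logit = nn.Linear(64, 1)(x)     # illustrative CTR head
```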
-
Traffic Prediction Model Based on Decoupled Adaptive Dynamic Graph Convolution
郑创锐, 邓秀勤, 陈磊. 基于解耦自适应动态图卷积的交通预测模型[J]. 计算机科学, 2025, 52(6A): 240400149-8.
ZHENG Chuangrui, DENG Xiuqin, CHEN Lei. Traffic Prediction Model Based on Decoupled Adaptive Dynamic Graph Convolution[J]. Computer Science, 2025, 52(6A): 240400149-8. - ZHENG Chuangrui, DENG Xiuqin, CHEN Lei
- Computer Science. 2025, 52 (6A): 240400149-8. doi:10.11896/jsjkx.240400149
-
Abstract
PDF(2903KB) ( 50 )
- References | Related Articles | Metrics
-
Traffic prediction plays a crucial role in urban planning and traffic management. Traditional prediction methods based on machine learning and statistics are limited in their ability to capture complex nonlinear relationships and long-term dependencies, and fail to capture the intricate spatio-temporal relationships within traffic networks. Existing models based on graph neural networks (GNN) mostly use preset static graphs, which cannot accurately reflect the actual topology of road networks, and almost all of them only consider the propagation of traffic flow between nodes while neglecting the traffic generation process at each node. To address these issues, we propose a decoupled adaptive dynamic graph convolutional network (DADGCN) model. The model quantifies the dynamic correlations among nodes through an adaptive dynamic graph module, thereby capturing the complex spatial dependencies in the traffic network. At the same time, it decouples node traffic into propagated traffic and generated traffic in a data-driven manner and processes the decoupled signals with a multi-head self-attention mechanism, enhancing the model's flexibility on complex traffic data and improving prediction accuracy. Experiments show that on the METR-LA and PEMS-BAY datasets, DADGCN achieves improvements of 7.78%, 10.14% and 25.39%, 21.19% in 60-minute MAE over the diffusion convolution-based models DCRNN and Graph WaveNet. On the PEMS04 and PEMS08 datasets, DADGCN achieves significant improvements of 11.61% in MAPE and 3.90% in RMSE over the adaptive-graph-based model MTGNN. This shows that the model can not only understand the inherent dynamics of traffic flow more deeply but also adapt to changes in various complex environments, providing more accurate and reliable data support for urban traffic management and planning.
-
Multivariate Time Series Prediction Based on Dynamic Graph Learning and Attention Mechanism
洪燚, 申时凯, 佘玉梅, 杨斌, 代飞, 王鉴潇, 张力逸. 基于动态图学习与注意力机制的多变量时间序列预测[J]. 计算机科学, 2025, 52(6A): 240700047-8.
HONG Yi, SHEN Shikai, SHE Yumei, YANG Bin, DAI Fei, WANG Jianxiao, ZHANG Liyi. Multivariate Time Series Prediction Based on Dynamic Graph Learning and Attention Mechanism[J]. Computer Science, 2025, 52(6A): 240700047-8. - HONG Yi, SHEN Shikai, SHE Yumei, YANG Bin, DAI Fei, WANG Jianxiao, ZHANG Liyi
- Computer Science. 2025, 52 (6A): 240700047-8. doi:10.11896/jsjkx.240700047
-
Abstract
PDF(3746KB) ( 44 )
- References | Related Articles | Metrics
-
Multivariate time series (MTS) prediction is challenging due to the complex temporal dependencies and dynamic correlations between variables. Most existing methods focus on single-dimension factors and do not fully consider the complexity of multi-source data and feature relationships that evolve over time, which limits their ability to capture dynamic dependencies in complex systems. To address these issues, this paper proposes a new model based on a dynamic graph neural network (DGNN), called DRLNet. DRLNet dynamically updates the graph adjacency matrix to adapt to time-varying correlations between variables. It also includes an attention mechanism that focuses on the evolution of connections between key nodes, and a gated mechanism that selectively combines historical dependency graphs by evaluating the correlations between these nodes and the current time step. Experimental results on three multivariate time series datasets demonstrate that DRLNet outperforms mainstream baseline methods in prediction accuracy and stability, and better captures the key patterns and changes in time series data, enhancing its effectiveness for MTS prediction.
-
Anomaly Detection of Multi-variable Time Series Data Based on Variational Graph Auto-encoders
尹文萃, 谢平, 叶成绪, 韩佳新, 夏星. 基于变分图自编码器的多变量时序数据异常检测[J]. 计算机科学, 2025, 52(6A): 240700124-8.
YIN Wencui, XIE Ping, YE Chengxu, HAN Jiaxin, XIA Xing. Anomaly Detection of Multi-variable Time Series Data Based on Variational Graph Auto-encoders[J]. Computer Science, 2025, 52(6A): 240700124-8. - YIN Wencui, XIE Ping, YE Chengxu, HAN Jiaxin, XIA Xing
- Computer Science. 2025, 52 (6A): 240700124-8. doi:10.11896/jsjkx.240700124
-
Abstract
PDF(4500KB) ( 63 )
- References | Related Articles | Metrics
-
Multivariate time series data anomaly detection refers to identifying outliers in multivariate time series data.To address the complexity of multivariate time series data and the feature dependencies among its internal variables,this paper proposes an anomaly detection method for multivariate time series data based on variational graph autoencoders.Firstly,a sliding window is used to extract variable embedding features,and a structural correlation graph is constructed based on feature similarity.The correlations among the multivariate time series data are then optimized through a variational graph autoencoder to improve the structural characteristics of the data.Secondly,a multi-head attention mechanism is used to improve the feature representation of different channels of the multivariate time series data,which is fused with the structural information.Finally,extreme value theory is used to select the threshold and perform unsupervised anomaly detection.Experimental results show that the F1 score of this model reaches 81.43% and 99.67% on the SWaT and MSL datasets,respectively.
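As a rough illustration of the first step only (not the paper's exact construction), a structural correlation graph can be built from sliding-window embeddings using cosine similarity; the window length and threshold below are arbitrary placeholders.

```python
import numpy as np

def correlation_graph(X: np.ndarray, window: int, threshold: float = 0.3) -> np.ndarray:
    """Embed each variable of a multivariate series X (timesteps x variables)
    with its most recent sliding window, then connect variables whose cosine
    similarity exceeds a threshold."""
    W = X[-window:].T                                   # one window per variable
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    sim = W @ W.T                                       # pairwise cosine similarity
    A = (sim > threshold).astype(float)
    np.fill_diagonal(A, 0.0)                            # no self-loops
    return A

X = np.random.default_rng(1).normal(size=(200, 6))      # toy 6-variable series
print(correlation_graph(X, window=30))
```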
-
Cloud Platform Load Data Forecasting Method Based on Spatiotemporal Graph Attention Network
李英健, 王永生, 刘晓君, 任渊. 基于时空图注意力网络的云平台负载数据预测方法[J]. 计算机科学, 2025, 52(6A): 240700178-8.
LI Yingjian, WANG Yongsheng, LIU Xiaojun, REN Yuan. Cloud Platform Load Data Forecasting Method Based on Spatiotemporal Graph Attention Network[J]. Computer Science, 2025, 52(6A): 240700178-8. - LI Yingjian, WANG Yongsheng, LIU Xiaojun, REN Yuan
- Computer Science. 2025, 52 (6A): 240700178-8. doi:10.11896/jsjkx.240700178
-
Abstract
PDF(3221KB) ( 47 )
- References | Related Articles | Metrics
-
Real-time prediction of load data collected from cloud platform monitoring helps in early identification of future system performance trends in cloud operations.However,load data typically lacks clear periodicity or regularity and contains significant noise interference.Existing methods suffer from deficiencies in feature learning planning,relying on other load features and struggling to capture the momentum of load trends.To achieve accurate and efficient load data prediction,this paper proposes a cloud platform load data prediction method based on a spatiotemporal graph attention network.Firstly,an improved empirical wavelet transform is applied to perform time-frequency domain transformation on the load data,reducing noise interference and obtaining effectively decomposed modal features.To enhance the model’s capability in handling spikes and non-periodic characteristics,key performance factors tailored to the load data characteristics are designed using financial technical indicators.Additionally,the modal features and key performance factors are reconstructed with the original sequence to build a graph learning layer.The graph attention network is then used to dynamically capture the relationships between the load sequences and features,and a bidirectional long short-term memory network focuses on temporal dependency information.Experimental validation is conducted on load datasets from Amazon and Alibaba Cloud,and the results show that,compared with the best baseline model,RMSE is reduced by 13.44%,36.90%,7.41%,and 14.93% on the four datasets,respectively.
-
Study on Short-time Passenger Flow Data Generation and Prediction Method for Rail Transportation
郜新军, 张梅欣, 朱力. 面向轨道交通的短时客流数据生成与预测方法研究[J]. 计算机科学, 2025, 52(6A): 240600017-5.
GAO Xinjun, ZHANG Meixin, ZHU Li. Study on Short-time Passenger Flow Data Generation and Prediction Method for Rail Transportation[J]. Computer Science, 2025, 52(6A): 240600017-5. - GAO Xinjun, ZHANG Meixin, ZHU Li
- Computer Science. 2025, 52 (6A): 240600017-5. doi:10.11896/jsjkx.240600017
-
Abstract
PDF(3292KB) ( 32 )
- References | Related Articles | Metrics
-
With the acceleration of urbanization,the dynamic change of subway passenger flow and the perturbations caused by uncertainty affect the quality of urban rail transit operation services in China.This study proposes a passenger flow data enhancement method based on generative adversarial networks for networked rail transit operation,which uses a small amount of original passenger flow data to generate a large amount of usable data with the same characteristics.On the basis of passenger flow data enhancement,we further study accurate prediction of rail transit operation status based on spatio-temporal multi-dimensional features,and propose a passenger flow data prediction method based on long short-term memory networks,convolutional neural networks,and graph neural networks,which realizes accurate prediction of rail transit passenger flow data in the temporal dimension and the spatio-temporal dimension,respectively.The generation and prediction of short-time passenger flow data can effectively alleviate passenger flow pressure.Additionally,accurate passenger flow prediction provides a solid foundation for adjusting train operations,improves the quality of rail transit services,and offers theoretical support for future urban development planning.
-
Representation and Reasoning System Realization of Inconsistent Knowledge
朱福喜, 朱丽达. 不协调知识的表示和推理系统实现[J]. 计算机科学, 2025, 52(6A): 240700139-5.
ZHU Fuxi, ZHU Lida. Representation and Reasoning System Realization of Inconsistent Knowledge[J]. Computer Science, 2025, 52(6A): 240700139-5. - ZHU Fuxi, ZHU Lida
- Computer Science. 2025, 52 (6A): 240700139-5. doi:10.11896/jsjkx.240700139
-
Abstract
PDF(1782KB) ( 33 )
- References | Related Articles | Metrics
-
As a type of non-traditional logic,paraconsistent logic is capable of representing and handling inconsistent knowledge in a rational manner.However,the realization of representation and reasoning of inconsistent knowledge remains an urgent research topic.This paper adopts a paraconsistent logic system,annotated logic,as the model for achieving the representation and reasoning of inconsistent knowledge,and employs Python as the implementation tool.
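A toy Python sketch of the underlying idea (Belnap-style four-valued annotations, not the authors' annotated-logic system): conflicting evidence is absorbed into a BOTH annotation instead of trivializing the knowledge base.

```python
from enum import Enum

class Ann(Enum):
    UNKNOWN = 0
    TRUE = 1
    FALSE = 2
    BOTH = 3

def join(a: Ann, b: Ann) -> Ann:
    """Least upper bound in the knowledge ordering of the annotation lattice."""
    if a == b or b == Ann.UNKNOWN:
        return a
    if a == Ann.UNKNOWN:
        return b
    return Ann.BOTH                      # TRUE joined with FALSE yields BOTH

class KnowledgeBase:
    def __init__(self):
        self.facts = {}                  # atom -> annotation

    def tell(self, atom: str, ann: Ann):
        self.facts[atom] = join(self.facts.get(atom, Ann.UNKNOWN), ann)

    def ask(self, atom: str) -> Ann:
        return self.facts.get(atom, Ann.UNKNOWN)

kb = KnowledgeBase()
kb.tell("bird(tweety)", Ann.TRUE)
kb.tell("bird(tweety)", Ann.FALSE)       # conflicting report
print(kb.ask("bird(tweety)"))            # Ann.BOTH; reasoning continues rationally
```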
-
Correntropy Based Multi-view Low-rank Matrix Factorization and Constraint Graph Learning for Multi-view Data Clustering
杜元花, 陈盼, 周楠, 施开波, 陈二阳, 张远鹏. 基于相关熵的多视角低秩矩阵分解和多视角数据聚类中的约束图学习[J]. 计算机科学, 2025, 52(6A): 240900131-10.
DU Yuanhua, CHEN Pan, ZHOU Nan, SHI Kaibo, CHEN Eryang, ZHANG Yuanpeng. Correntropy Based Multi-view Low-rank Matrix Factorization and Constraint Graph Learning for Multi-view Data Clustering[J]. Computer Science, 2025, 52(6A): 240900131-10. - DU Yuanhua, CHEN Pan, ZHOU Nan, SHI Kaibo, CHEN Eryang, ZHANG Yuanpeng
- Computer Science. 2025, 52 (6A): 240900131-10. doi:10.11896/jsjkx.240900131
-
Abstract
PDF(3976KB) ( 31 )
- References | Related Articles | Metrics
-
Most current multi-view clustering methods focus on unsupervised learning scenarios and cannot utilize the label information in the data.Furthermore,they cannot handle outliers that may exist in the data.To address these issues,this paper proposes a correntropy based multi-view low-rank matrix factorization(CMLMF) method for semi-supervised clustering of multi-view data.Specifically,a constraint matrix is used to introduce label information,and the influence of outliers in the affinity matrix and labels is removed by maximizing the correntropy criterion.In order to make full use of local structure information,a correntropy based multi-view constrained graph learning framework is also proposed to adaptively extract the local structure hidden in the multi-view data.In addition,the correntropy based multi-view low-rank matrix factorization model is combined with the adaptive graph learning framework to extract the global reconstruction information of the data.Finally,an effective optimization algorithm combining the Fenchel conjugate(FC) and block coordinate update(BCU) is designed to solve the model.Experimental results show that,compared with existing methods,the accuracy(ACC),normalized mutual information(NMI),and precision are greatly improved,which verifies the effectiveness of the algorithm.
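The correntropy criterion itself is easy to state: it is the mean of a Gaussian kernel applied to element-wise errors, so gross outliers are bounded rather than squared. A minimal numpy illustration on hypothetical data (not the CMLMF objective):

```python
import numpy as np

def correntropy(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Empirical correntropy between two vectors: mean Gaussian kernel of the
    element-wise errors. Large errors saturate the kernel, giving robustness."""
    err = x - y
    return float(np.mean(np.exp(-err ** 2 / (2.0 * sigma ** 2))))

clean = np.ones(100)
noisy = clean.copy()
noisy[:5] = 100.0                                  # 5 gross outliers
print(correntropy(clean, clean), correntropy(clean, noisy))   # 1.0 vs ~0.95
```

Under a squared-error criterion the five outliers would dominate the objective; under correntropy they only shave off about 0.05, which is the robustness the method exploits.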
-
Extraction of Crustal Deformation Anomalies Based on Transformer-Isolation Forest
王雪鉴, 王毅恒, 孙新坡, 柳川, 加明, 赵超, 杨超. 基于Transformer-Isolation Forest的地壳形变异常提取[J]. 计算机科学, 2025, 52(6A): 240600155-6.
WANG Xuejian, WANG Yiheng, SUN Xinpo, LIU Chuan, JIA Ming, ZHAO Chao, YANG Chao. Extraction of Crustal Deformation Anomalies Based on Transformer-Isolation Forest[J]. Computer Science, 2025, 52(6A): 240600155-6. - WANG Xuejian, WANG Yiheng, SUN Xinpo, LIU Chuan, JIA Ming, ZHAO Chao, YANG Chao
- Computer Science. 2025, 52 (6A): 240600155-6. doi:10.11896/jsjkx.240600155
-
Abstract
PDF(4094KB) ( 41 )
- References | Related Articles | Metrics
-
GPS crustal deformation monitoring plays a vital role in the study of earthquake precursors.With the accumulation of observation data,traditional data processing methods face challenges in big data processing.This study proposes an algorithm based on a Transformer network and a reconstruction-error training strategy.The Transformer network is trained on GPS crustal displacement data from earthquake-free periods so that it learns to reconstruct normal data,and the reconstruction errors of GPS crustal displacement data from anomalous periods are then fed into an Isolation Forest anomaly detection model to determine whether they constitute earthquake precursor anomalies.We extract the pre-seismic anomalies of two Mw>5 events from GPS crustal deformation data and obtain more comprehensive and common anomaly phenomena than previous studies.Statistical analysis shows that similar anomalies appear in the GPS crustal deformation data of these observation stations before multiple earthquakes,indicating the existence of similar crustal deformation accumulation and release patterns.These findings underscore the value of understanding earthquake mechanisms for improving earthquake prediction and prevention.
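The second stage can be pictured with scikit-learn's IsolationForest applied to reconstruction-error vectors; the data below are synthetic stand-ins, and in the actual pipeline the errors would come from the trained Transformer.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical reconstruction errors (e.g. E/N/U displacement components)
# for quiet periods and for a test window containing injected anomalies.
rng = np.random.default_rng(0)
normal_err = rng.normal(0.0, 0.1, size=(500, 3))
test_err = np.vstack([rng.normal(0.0, 0.1, size=(95, 3)),
                      rng.normal(1.5, 0.3, size=(5, 3))])

detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(normal_err)                 # learn what "normal" errors look like
labels = detector.predict(test_err)      # +1 = normal, -1 = candidate precursor
print("flagged windows:", np.where(labels == -1)[0])
```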
-
Internet Application User Profiling Analysis Based on Selection State Space Graph Neural Network
滕岷军, 孙腾中, 李彦辰, 陈媛, 宋沫飞. 基于选择状态空间图神经网络的互联网应用用户画像分析[J]. 计算机科学, 2025, 52(6A): 240900060-8.
TENG Minjun, SUN Tengzhong, LI Yanchen, CHEN Yuan, SONG Mofei. Internet Application User Profiling Analysis Based on Selection State Space Graph Neural Network[J]. Computer Science, 2025, 52(6A): 240900060-8. - TENG Minjun, SUN Tengzhong, LI Yanchen, CHEN Yuan, SONG Mofei
- Computer Science. 2025, 52 (6A): 240900060-8. doi:10.11896/jsjkx.240900060
-
Abstract
PDF(2313KB) ( 36 )
- References | Related Articles | Metrics
-
User profile analysis aims to delve into users’ preferences in internet applications,which holds significant importance for various practical applications like recommendation systems and personalized advertising.Recent research trends consider users and their interactions as nodes in a graph structure,transforming user profile construction into a node classification task and utilizing deep graph neural network technology for user feature extraction.However,these studies often fail to fully consider the differences in interaction types among different users and their temporal relationships,thereby limiting the accuracy of user profile analysis.In light of this,this paper proposes a graph neural network method based on selected state space for user profile analysis to simultaneously capture context information such as multi-user comparisons and temporal patterns implied by graph structure relationships.To effectively model the long-range dependency relationships in user operation sequences,we introduce a state space model into the graph neural network and combine it with a node prioritization strategy based on attention mechanisms to enhance context-aware reasoning,thereby improving the predictive performance of explicit user attributes such as gender and age.Experimental validation on two real internet application datasets confirms the effectiveness of our proposed method.
-
Resource Preference-sensitive Cloud Configuration Recommendation Method for Big Data Applications
梁哲恒, 吴悦文, 李永健, 张小陆, 沈桂泉, 苏林刚, 刘均乐. 资源偏好敏感的大数据应用云配置推荐方法[J]. 计算机科学, 2025, 52(6A): 240800114-9.
LIANG Zheheng, WU Yuewen, LI Yongjian, ZHANG Xiaolu, SHEN Guiquan, SU Lingang, LIU Junle. Resource Preference-sensitive Cloud Configuration Recommendation Method for Big Data Applications[J]. Computer Science, 2025, 52(6A): 240800114-9. - LIANG Zheheng, WU Yuewen, LI Yongjian, ZHANG Xiaolu, SHEN Guiquan, SU Lingang, LIU Junle
- Computer Science. 2025, 52 (6A): 240800114-9. doi:10.11896/jsjkx.240800114
-
Abstract
PDF(5314KB) ( 36 )
- References | Related Articles | Metrics
-
Big data and stream data computing have been widely used to support scenarios such as anomaly detection and early warning in smart grids.Cloud computing serves as the mainstream operating environment for big data and stream data applications.However,optimizing performance by selecting suitable cloud resources poses significant challenges.Current methods based on exhaustive configuration search use all candidate cloud configurations as the search space,leading to excessively large search spaces and a risk of getting stuck in local optima.To address this issue,this paper proposes a resource preference-sensitive cloud configuration recommendation method for big data applications.It employs a resource preference-sensitive random forest model as the probabilistic model in Bayesian optimization to balance the accuracy and cost of searches when the configuration option space is large.Experimental results show that,compared to the exhaustive configuration search method CherryPick,the proposed method improves search accuracy by 23% while reducing the number of searches by 25%~44%.Compared to the data-driven method RP-CH,the accuracy of search results is 10% lower,but the average number of searches is reduced by 78%.
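A minimal sketch of the general idea of a random-forest surrogate inside Bayesian optimization (plain expected improvement over a toy configuration space, not the paper's preference-sensitive model; the configuration dimensions and cost function are hypothetical):

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def expected_improvement(forest, X_cand, best_cost):
    """Score candidate cloud configurations: the spread of per-tree
    predictions serves as a crude uncertainty estimate for EI."""
    preds = np.stack([t.predict(X_cand) for t in forest.estimators_])
    mu, sigma = preds.mean(axis=0), preds.std(axis=0) + 1e-9
    z = (best_cost - mu) / sigma
    return (best_cost - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# toy search space: (vCPUs, memory_GB, node_count); cost is a made-up function
rng = np.random.default_rng(0)
X_obs = rng.uniform([2, 4, 2], [32, 128, 16], size=(10, 3))
y_obs = X_obs[:, 0] * 0.3 + X_obs[:, 2] * 0.5 + rng.normal(0, 0.1, 10)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_obs, y_obs)
X_cand = rng.uniform([2, 4, 2], [32, 128, 16], size=(200, 3))
ei = expected_improvement(forest, X_cand, y_obs.min())
print("next configuration to try:", X_cand[np.argmax(ei)])
```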
-
CSO-LSTM Based Power Prediction Method for New Energy Generation
顾慧杰, 方文崇, 周志烽, 朱文, 马光, 李映辰. 一种基于CSO-LSTM的新能源发电功率预测方法[J]. 计算机科学, 2025, 52(6A): 240600053-11.
GU Huijie, FANG Wenchong, ZHOU Zhifeng, ZHU Wen, MA Guang, LI Yingchen. CSO-LSTM Based Power Prediction Method for New Energy Generation[J]. Computer Science, 2025, 52(6A): 240600053-11. - GU Huijie, FANG Wenchong, ZHOU Zhifeng, ZHU Wen, MA Guang, LI Yingchen
- Computer Science. 2025, 52 (6A): 240600053-11. doi:10.11896/jsjkx.240600053
-
Abstract
PDF(4082KB) ( 32 )
- References | Related Articles | Metrics
-
With the rapid development and wide adoption of new energy generation technology,it has become a key part of the power system.Accurate prediction of new energy generation power is of great significance for the rational planning of the power system.However,existing new energy generation power prediction methods still face the following challenges:1)The hyperparameters of deep neural network based prediction models have an important impact on prediction performance,yet most current algorithms still assign the hyperparameters manually.2)It is difficult for existing prediction models to efficiently mine the long-term dependencies in time series data,which affects prediction accuracy.To solve these problems,this paper proposes a CSO-LSTM(competitive swarm optimizer and long short-term memory) based method for new energy generation power prediction,which uses a two-stage model to comprehensively improve prediction performance.In the first stage,an LSTM hyperparameter optimization algorithm based on the competitive swarm optimizer is proposed,which exploits the strong exploration and global optimization abilities of the competitive swarm optimizer to realize adaptive adjustment of the prediction model’s hyperparameters.In the second stage,an LSTM model based on a combined multi-gating mechanism is designed,which combines a self-attention gating mechanism and a combined multi-gating network to mine the long-term dependencies in new energy generation time series data,so as to further adapt to new energy generation patterns at different time scales.Finally,the proposed CSO-LSTM is compared with four advanced prediction methods on two real datasets and one simulation dataset,and the experimental results verify the effectiveness and efficiency of the proposed CSO-LSTM model.
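The first-stage idea can be sketched with a bare-bones competitive swarm optimizer: particles compete in random pairs, and the loser learns from the winner and the swarm mean. The objective and hyperparameter bounds below are placeholders standing in for a real validation loss.

```python
import numpy as np

def cso_minimize(f, lb, ub, swarm=20, iters=50, phi=0.1, seed=0):
    """Minimal competitive swarm optimizer: in each iteration particles are
    paired at random, the loser of each fitness comparison is updated toward
    the winner and the swarm mean, and winners survive unchanged."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(swarm, lb.size))
    V = np.zeros_like(X)
    for _ in range(iters):
        order = rng.permutation(swarm)
        mean = X.mean(axis=0)
        for i, j in zip(order[::2], order[1::2]):
            win, lose = (i, j) if f(X[i]) <= f(X[j]) else (j, i)
            r1, r2, r3 = rng.random((3, lb.size))
            V[lose] = r1 * V[lose] + r2 * (X[win] - X[lose]) + phi * r3 * (mean - X[lose])
            X[lose] = np.clip(X[lose] + V[lose], lb, ub)
    return X[min(range(swarm), key=lambda k: f(X[k]))]

# hypothetical LSTM hyperparameters (hidden_units, learning_rate, dropout);
# a real objective would be a validation loss, here a simple stand-in.
obj = lambda h: (h[0] - 64) ** 2 / 1e3 + (h[1] - 1e-3) ** 2 * 1e5 + (h[2] - 0.2) ** 2
print(cso_minimize(obj, lb=[16, 1e-4, 0.0], ub=[256, 1e-2, 0.5]))
```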
-
Deep Learning Stock Price Probability Prediction Based on Multi-modal Feature Wavelet Decomposition
张永宇, 郭晨娟, 魏涵玥. 基于多模态特征小波分解的深度学习股价概率预测[J]. 计算机科学, 2025, 52(6A): 240600140-11.
ZHANG Yongyu, GUO Chenjuan, WEI Hanyue. Deep Learning Stock Price Probability Prediction Based on Multi-modal Feature Wavelet Decomposition[J]. Computer Science, 2025, 52(6A): 240600140-11. - ZHANG Yongyu, GUO Chenjuan, WEI Hanyue
- Computer Science. 2025, 52 (6A): 240600140-11. doi:10.11896/jsjkx.240600140
-
Abstract
PDF(3092KB) ( 47 )
- References | Related Articles | Metrics
-
This paper constructs an innovative deep learning model for probabilistic stock price prediction based on multi-modal feature wavelet decomposition(MWDPF).This model integrates multi-source heterogeneous information,including dynamic continuous features,dynamic categorical features,static continuous features,and static categorical features.Through a parallel fusion strategy,it fully explores the complementary information in different feature subspaces,comprehensively characterizing the multiple dimensions affecting stock price fluctuations.It adopts an auto-regressive recurrent neural network architecture,which can directly output the probability distribution prediction of stock price changes,rather than a single deterministic value prediction,more closely matching the actual probabilistic distribution characteristics of stock prices.Additionally,this model introduces wavelet decomposition technology to denoise the original time series,adaptively filtering out noise components at different scales,improving its ability to capture intrinsic fluctuation patterns.In the empirical analysis phase,this study collects multi-modal data from financial databases and internet forums,and through a series of preprocessing steps such as missing value imputation,outlier removal,and time alignment,as well as careful feature engineering and model optimization,achieves excellent prediction performance,significantly outperforming traditional statistical models and deep learning models,with substantial improvements in evaluation metrics.The prediction results generated by the proposed model are used to construct a multi-factor stock selection strategy,achieving considerable excess returns in real-world backtesting,further verifying the effectiveness of the model in practical investment decision-making.This study provides an effective solution for stock price prediction,enriches the theories and methods of quantitative investment,and has significant theoretical and application value.
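The wavelet-denoising step can be illustrated with PyWavelets using a generic soft-thresholding recipe; the wavelet and decomposition level are assumed choices, not the model's exact preprocessing.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(series: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Decompose a price series, soft-threshold the detail coefficients at
    every scale, and reconstruct -- a standard wavelet denoising recipe."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(series)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 512)) + 100
smooth = wavelet_denoise(prices)
print(prices[:3], smooth[:3])
```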
-
Modeling of Civil Aviation Passenger Individual and Social Preferences and Optimization of Flight Seat Allocation
赵耀帅, 张毅. 民航旅客个体和社交偏好建模及航班座位分配优化[J]. 计算机科学, 2025, 52(6A): 240600038-8.
ZHAO Yaoshuai, ZHANG Yi. Modeling of Civil Aviation Passenger Individual and Social Preferences and Optimization of Flight Seat Allocation[J]. Computer Science, 2025, 52(6A): 240600038-8. - ZHAO Yaoshuai, ZHANG Yi
- Computer Science. 2025, 52 (6A): 240600038-8. doi:10.11896/jsjkx.240600038
-
Abstract
PDF(2657KB) ( 36 )
- References | Related Articles | Metrics
-
In the civil aviation industry,one of the keys to improving passenger satisfaction lies in understanding travelers’ personalized needs and providing customized travel services,particularly in flight seat allocation.However,achieving this goal faces two major challenges: how to accurately model passenger preferences and how to allocate seats rationally.Traditional methods often require explicit knowledge of passengers’ true preferences,yet current strategies such as paid seat selection and first-come-first-served approaches struggle to fully satisfy passenger demands.To address this issue,it is essential to consider seat availability,spatial correlations,and social relationships among passengers.This paper proposes a novel solution that models passenger preferences from both individual and social dimensions,framing seat allocation as a combinatorial optimization problem aimed at maximizing the fulfillment of individual and social preferences while adhering to business rules and passenger value.The solution employs an iterative local search algorithm to optimize seat allocation.Experimental results demonstrate that this method effectively models passenger seat preferences and significantly enhances overall flight satisfaction.
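A toy version of the iterative local search, with hypothetical preference scores and seat layout rather than the paper's objective: swap two passengers' seats and keep the swap only if the combined individual-plus-social satisfaction does not decrease.

```python
import random

def satisfaction(assign, prefs, together):
    """Toy objective: individual seat preference plus a bonus when socially
    related passengers sit in adjacent seats of the same row."""
    score = sum(prefs[p].get(seat, 0) for p, seat in assign.items())
    for a, b in together:
        (ra, ca), (rb, cb) = assign[a], assign[b]
        score += 2 if ra == rb and abs(ca - cb) == 1 else 0
    return score

def local_search(passengers, seats, prefs, together, iters=2000, seed=0):
    rng = random.Random(seed)
    assign = dict(zip(passengers, seats))                 # arbitrary start
    best = satisfaction(assign, prefs, together)
    for _ in range(iters):
        a, b = rng.sample(passengers, 2)
        assign[a], assign[b] = assign[b], assign[a]       # try a swap
        cand = satisfaction(assign, prefs, together)
        if cand >= best:
            best = cand
        else:
            assign[a], assign[b] = assign[b], assign[a]   # undo worsening swap
    return assign, best

passengers = ["p1", "p2", "p3", "p4"]
seats = [(10, 1), (10, 2), (10, 3), (11, 1)]              # (row, column)
prefs = {"p1": {(10, 1): 3}, "p2": {}, "p3": {(11, 1): 2}, "p4": {}}
print(local_search(passengers, seats, prefs, together=[("p1", "p2")]))
```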
-
Research and Practice on Key Technologies for Serverless Computing
周丹颖, 黄天昊, 刘如明. 服务器无感知计算关键技术研究及实践探索[J]. 计算机科学, 2025, 52(6A): 240700114-6.
ZHOU Danying, HUANG Tianhao, LIU Ruming. Research and Practice on Key Technologies for Serverless Computing[J]. Computer Science, 2025, 52(6A): 240700114-6. - ZHOU Danying, HUANG Tianhao, LIU Ruming
- Computer Science. 2025, 52 (6A): 240700114-6. doi:10.11896/jsjkx.240700114
-
Abstract
PDF(2901KB) ( 37 )
- References | Related Articles | Metrics
-
On account of its advantage of scale,cloud computing maximizes the value of computing.In recent years,serverless computing,as a new paradigm of cloud computing,has emerged rapidly and is profoundly reshaping the development,deployment,operation and maintenance of applications.Centered on applications,serverless computing further refines the supply pattern of cloud services,simplifies the construction of cloud-based applications,and effectively improves resource utilization.It represents a significant trend of cloud computing.Currently,serverless computing technologies are maturing and related services are emerging.Function as a service(FaaS),edge function as a service(Edge FaaS),serverless container services and serverless application hosting services are typical serverless computing styles.Nowadays,serverless computing has already been widely used in fields such as artificial intelligence,edge computing and big data analysis.Starting from the concept of serverless computing,this paper analyzes the value and development process of serverless computing,dissects its core technologies and practical applications,and explores its technological ecosystem and evolution trends.Finally,it gives development recommendations for serverless computing in China.
-
Online Parallel SDN Routing Optimization Algorithm Based on Deep Reinforcement Learning
吴宗明, 曹继军, 汤强. 基于深度强化学习的在线并行SDN路由优化算法研究[J]. 计算机科学, 2025, 52(6A): 240900018-9.
WU Zongming, CAO Jijun, TANG Qiang. Online Parallel SDN Routing Optimization Algorithm Based on Deep Reinforcement Learning[J]. Computer Science, 2025, 52(6A): 240900018-9. - WU Zongming, CAO Jijun, TANG Qiang
- Computer Science. 2025, 52 (6A): 240900018-9. doi:10.11896/jsjkx.240900018
-
Abstract
PDF(3853KB) ( 47 )
- References | Related Articles | Metrics
-
The routing behavior of traditional SDN traffic engineering models based on deep reinforcement learning(DRL) is often unpredictable,and the traditional DRL-based routing scheme is unreliable if it simply applies the DRL algorithm to the communication network system.This paper proposes an online parallel SDN routing optimization algorithm based on DRL,so as to reliably utilize the trial-and-error DRL routing algorithm to improve network performance.The algorithm uses a combination of online parallel routing decision-making and offline training in the SDN framework to solve the SDN routing optimization problem.This method can alleviate the reliability issues arising from the deep reinforcement learning model’s lack of convergence and the exploration process.To a certain extent,it can also alleviate the negative impact of the unexplainability of the deep reinforcement learning intelligent routing model and the unreliability of routing behavior under network emergencies.This paper evaluates the performance of the online parallel SDN routing optimization algorithm by extensive experiments on a real network topology.The experimental results show that the network performance of the proposed algorithm is better than the traditional DRL-based routing algorithm and OSPF algorithm.
-
OFDM Index Modulation Signal Detection Based on Deep Learning
王婵飞, 杨婧, 许亚美, 何继爱. 深度学习驱动的OFDM索引调制信号检测[J]. 计算机科学, 2025, 52(6A): 240900122-6.
WANG Chanfei, YANG Jing, XU Yamei, HE Jiai. OFDM Index Modulation Signal Detection Based on Deep Learning[J]. Computer Science, 2025, 52(6A): 240900122-6. - WANG Chanfei, YANG Jing, XU Yamei, HE Jiai
- Computer Science. 2025, 52 (6A): 240900122-6. doi:10.11896/jsjkx.240900122
-
Abstract
PDF(3272KB) ( 51 )
- References | Related Articles | Metrics
-
In the pursuit of optimizing orthogonal frequency division multiplexing(OFDM) systems,a notable challenge lies in the relative inadequacy of their detection performance.Meanwhile,deep neural network-based index modulation(DNN-IM) detection algorithms generally suffer from issues such as high bit error rates(BER) and significant loss values.To overcome these difficulties,this paper proposes an index modulation detection algorithm based on the multilayer perceptron(MLP),namely the MLP-IM algorithm.This algorithm employs an architecture designed with two fused connection layers and an output layer,utilizing carefully selected activation functions to achieve precise restoration of data bits in OFDM index modulation systems.Firstly,the fundamental theories of OFDM systems are applied to the data preprocessing stage.Subsequently,comprehensive offline training of the MLP neural network model is conducted using simulated datasets,ensuring the model’s robustness and accuracy.During the detection phase,efficient detection of the OFDM index modulation system is achieved through the MLP-IM detection algorithm.Simulation results demonstrate that the proposed MLP-IM algorithm exhibits performance comparable to that of the maximum likelihood detection algorithm in terms of BER control and loss values,and in some scenarios is even superior to the existing DNN-IM detection algorithm,with a performance improvement of 0.2~6 dB.
-
Three Dimensional DV-Hop Location Based on Improved Beluga Whale Optimization
陈悦, 冯锋. 基于改进白鲸优化算法的三维DV-Hop定位算法[J]. 计算机科学, 2025, 52(6A): 240800125-9.
CHEN Yue, FENG Feng. Three Dimensional DV-Hop Location Based on Improved Beluga Whale Optimization[J]. Computer Science, 2025, 52(6A): 240800125-9. - CHEN Yue, FENG Feng
- Computer Science. 2025, 52 (6A): 240800125-9. doi:10.11896/jsjkx.240800125
-
Abstract
PDF(3315KB) ( 37 )
- References | Related Articles | Metrics
-
To address the issues of low node localization accuracy and large errors of traditional three dimensional DV-Hop algorithms in wireless sensor networks when dealing with complex environments,an improved beluga whale optimization(IBWO) based three dimensional localization algorithm(IBWO-DV-Hop) is proposed.Firstly,the minimum hop count of nodes is optimized through multiple communication radii with an introduced correction factor,and a hop-distance weighted optimization method is used to correct the average hop distance,reducing the impact of communication radius uncertainty and hop count error on positioning accuracy.Secondly,IBWO is introduced instead of the least squares method to estimate the positions of unknown nodes.The improvements include using a combination of the Sobol sequence and a reverse learning strategy in the initialization stage of the beluga whale optimization algorithm to improve the initial population and increase population diversity.Then,adaptive t-distribution mutation and an adaptive Levy flight strategy are introduced in the exploration and exploitation stages respectively to enhance the algorithm’s optimization ability.Finally,a lens imaging reverse learning strategy is introduced in the whale fall stage to enhance the algorithm’s global optimization ability.Experimental results show that compared with traditional three dimensional DV-Hop algorithms and other similar algorithms,the proposed algorithm achieves higher positioning accuracy.
-
RFID Indoor Positioning Method Based on Improved Random Forest Algorithm
蒋蔚, 郭成波, 寇家华, 张若宛, 郭艳玲. 基于改进随机森林算法的RFID室内定位方法[J]. 计算机科学, 2025, 52(6A): 240900124-7.
JIANG Wei, GUO Chengbo, KOU Jiahua, ZHANG Ruowan, GUO Yanling. RFID Indoor Positioning Method Based on Improved Random Forest Algorithm[J]. Computer Science, 2025, 52(6A): 240900124-7. - JIANG Wei, GUO Chengbo, KOU Jiahua, ZHANG Ruowan, GUO Yanling
- Computer Science. 2025, 52 (6A): 240900124-7. doi:10.11896/jsjkx.240900124
-
Abstract
PDF(3358KB) ( 42 )
- References | Related Articles | Metrics
-
In order to solve the problems of low adoption and poor positioning accuracy of existing RFID technology in high-precision logistics and warehousing positioning,an RFID positioning method based on an improved random forest model is proposed. Firstly,an environment is built in which multiple antennas simultaneously read the received signal strength of reference tags,an iterative average filtering algorithm is used to process the received signal strength values collected during reading,and new attributes are derived from the existing received signal strength values by using a sliding window to expand the machine learning data set. Secondly,a random forest classification model is introduced as the basis of the random forest model,which takes the received signal strength and its new attributes as input and the X-axis and Y-axis coordinates as output. The relevant parameter values are determined through parameter analysis to improve the effectiveness of the random forest model in indoor positioning. Finally,the random forest classification model is used to predict the region to which the target tag belongs,and the random forest regression model of the corresponding region is then used to predict the exact coordinates of the target tag,so as to realize accurate indoor positioning based on the received signal strength of RFID technology. In an indoor environment,the proposed RFID indoor positioning method achieves an average positioning error of 4.98 cm. Compared with other algorithms,the average positioning accuracy is improved by more than 80%,which can meet the positioning needs of items in high-density logistics and warehousing scenarios.
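The two-step classify-then-regress pipeline can be sketched with scikit-learn on synthetic RSSI fingerprints; the geometry and region rule below are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Hypothetical fingerprints: each row is the RSSI read by 4 antennas for a
# reference tag, with a region label and an (x, y) position in centimetres.
rng = np.random.default_rng(0)
rssi = rng.uniform(-70, -30, size=(400, 4))
region = (rssi[:, 0] > -50).astype(int)                       # 2 toy regions
xy = np.column_stack([rssi[:, 0] * -1.5, rssi[:, 1] * -2.0])  # toy geometry

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(rssi, region)
regs = {r: RandomForestRegressor(n_estimators=100, random_state=0)
            .fit(rssi[region == r], xy[region == r]) for r in (0, 1)}

tag = rssi[:1]                                # RSSI of the tag to locate
r = int(clf.predict(tag)[0])                  # step 1: coarse region
print("region", r, "position", regs[r].predict(tag)[0])  # step 2: fine (x, y)
```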
-
Study on Multi-antenna Amplitude-Phase Difference Joint Modulation
张璐麟, 郑兴, 彭宇辉, 项楠天, 施昌涵, 苏江涛. 多天线幅值与相位差分联合调制方案研究[J]. 计算机科学, 2025, 52(6A): 240900128-4.
ZHANG Lulin, ZHENG Xing, PENG Yuhui, XIANG Nantian, SHI Changhan, SU Jiangtao. Study on Multi-antenna Amplitude-Phase Difference Joint Modulation[J]. Computer Science, 2025, 52(6A): 240900128-4. - ZHANG Lulin, ZHENG Xing, PENG Yuhui, XIANG Nantian, SHI Changhan, SU Jiangtao
- Computer Science. 2025, 52 (6A): 240900128-4. doi:10.11896/jsjkx.240900128
-
Abstract
PDF(2685KB) ( 38 )
- References | Related Articles | Metrics
-
The MIMO system,a pivotal technology in 5G communication,offers spatial diversity and spatial multiplexing capabilities for wireless communication,which can significantly enhance communication quality.However,this technique also makes MIMO signal detection complicated.To mitigate the complexity of MIMO signal detection,an amplitude-phase differential joint modulation(A-DPM) scheme is proposed.The A-DPM utilizes phase differences and amplitudes to carry information.By employing MIMO precoding and maximum ratio combining(MRC),the system enables the receiver to obtain maximum signal power.The phase differential eliminates phase rotations caused by multipath channels and singular value decomposition(SVD) precoding on symbols,while converting the time-varying phase rotation induced by residual carrier frequency offset into a non-time-varying one.Consequently,the A-DPM receiver does not require intricate MIMO channel estimation,simplifying the signal detection process.The simulations conducted in a multipath Rayleigh channel with residual frequency offset and sampling period offset confirm that the multi-antenna A-DPM scheme exhibits spatial diversity effects,outperforms space-time block coding(STBC) systems in terms of bit error rate performance,and has lower demodulation complexity compared to STBC systems.
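The key property, that a constant unknown phase rotation cancels out when information is carried in phase differences, can be shown with a toy numpy model; this is a simplified single-stream illustration with a known reference symbol, not the full MIMO A-DPM scheme.

```python
import numpy as np

def adpm_modulate(phase_bits, amp_bits):
    """Toy mapping: 2 bits per symbol in the phase *difference* to the previous
    symbol (QPSK-like) and 1 bit in the amplitude level; a reference symbol
    with known phase 0 is prepended."""
    dphi = np.pi / 2 * (2 * phase_bits[0::2] + phase_bits[1::2])
    phase = np.cumsum(dphi)                                 # differential encoding
    amp = np.where(amp_bits == 1, 1.0, 0.5)
    return np.concatenate([np.array([1.0 + 0j]), amp * np.exp(1j * phase)])

def adpm_demodulate(rx):
    """Recover bits without knowing the absolute channel phase: only the
    phase difference between consecutive symbols is used."""
    dphi = np.angle(rx[1:] * np.conj(rx[:-1]))              # rotation-invariant
    sym = np.round(dphi / (np.pi / 2)).astype(int) % 4
    phase_bits = np.stack([sym // 2, sym % 2], axis=1).reshape(-1)
    amp_bits = (np.abs(rx[1:]) > 0.75).astype(int)
    return phase_bits, amp_bits

pb = np.random.default_rng(0).integers(0, 2, 20)
ab = np.random.default_rng(1).integers(0, 2, 10)
rx = adpm_modulate(pb, ab) * np.exp(1j * 0.7)               # unknown phase rotation
rpb, rab = adpm_demodulate(rx)
print(np.array_equal(rpb, pb), np.array_equal(rab, ab))     # True True
```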
-
Incremental Routing and Scheduling Based on Greedy in TSN
周飞飞, 马涛, 付振霄, 朱云飞, 虞扬. 一种基于贪婪的时间敏感网络增量路由调度方法[J]. 计算机科学, 2025, 52(6A): 240800090-6.
ZHOU Feifei, MA Tao, FU Zhenxiao, ZHU Yunfei, YU Yang. Incremental Routing and Scheduling Based on Greedy in TSN[J]. Computer Science, 2025, 52(6A): 240800090-6. - ZHOU Feifei, MA Tao, FU Zhenxiao, ZHU Yunfei, YU Yang
- Computer Science. 2025, 52 (6A): 240800090-6. doi:10.11896/jsjkx.240800090
-
Abstract
PDF(2224KB) ( 36 )
- References | Related Articles | Metrics
-
The development of modern industries,such as smart grid and intelligent driving,imposes stringent demands on network transmission delay and reliability.To tackle these challenges,the IEEE 802.1 TSN working group proposes the concept of time-sensitive networks(TSN).Among them,the time-sensitive network model with cyclic queued forwarding(CQF) as the transmission mechanism has garnered significant attention in deterministic network research.However,traffic routing and scheduling remain critical tasks that need to be addressed.This paper proposes an incremental routing selection scheme based on the greedy algorithm and Dijkstra’s algorithm,leveraging relevant traffic characteristics.This scheme is further integrated with load-balanced time slot offset selection to develop a joint routing and scheduling scheme,addressing the routing-scheduling challenges posed by complex traffic flows in multi-path scenarios.In terms of simulation,a TSN network topology model is established and verified through numerous experiments.The results demonstrate that,compared to traditional greedy and tabu search algorithms,the proposed scheme exhibits notable advantages in terms of time consumption and scheduling success rate.
-
Optimization Strategy of Task Offloading Based on Meta Reinforcement Learning
赵婵婵, 杨星辰, 石宝, 吕飞, 刘利彬. 基于元强化学习的任务卸载优化策略[J]. 计算机科学, 2025, 52(6A): 240800050-8.
ZHAO Chanchan, YANG Xingchen, SHI Bao, LYU Fei, LIU Libin. Optimization Strategy of Task Offloading Based on Meta Reinforcement Learning[J]. Computer Science, 2025, 52(6A): 240800050-8. - ZHAO Chanchan, YANG Xingchen, SHI Bao, LYU Fei, LIU Libin
- Computer Science. 2025, 52 (6A): 240800050-8. doi:10.11896/jsjkx.240800050
-
Abstract
PDF(2989KB) ( 40 )
- References | Related Articles | Metrics
-
With the rapid development of edge computing,task offloading has become a crucial strategy for enhancing system performance and resource utilization.Existing deep learning-based offloading methods face challenges in real-world applications,such as low sample efficiency and poor adaptability to new environments.To address these issues,a task offloading method based on meta-reinforcement learning(MRL-PPO) is proposed,aiming to effectively solve the efficient offloading of heterogeneous tasks in edge computing while minimizing task delay and energy consumption.A sequence-to-sequence(Seq2Seq) network with an attention mechanism is designed,modeling offloading tasks as a directed acyclic graph(DAG).The encoder encodes the offloading tasks,and the decoder outputs different offloading decisions based on the context vector,addressing the complexity of network training caused by varying task sequence dimensions.The attention mechanism allows the model to dynamically focus on key features of the offloading tasks,improving decision accuracy and efficiency.To optimize the performance of the PPO algorithm in complex environments,an intrinsic reward learning algorithm is introduced.Experimental results demonstrate that the proposed algorithm outperforms existing methods in different tasks,and can quickly adapt to new environments,effectively reducing delay and energy consumption during task processing.
-
Research on the Method of C-RAN Networking Planning Based on Clustering Model
李恒毅, 杨国, 魏波, 陈虹君. 基于聚类模型的C-RAN组网规划方法研究[J]. 计算机科学, 2025, 52(6A): 241000015-4.
LI Hengyi, YANG Guo, WEI Bo, CHEN Hongjun. Research on the Method of C-RAN Networking Planning Based on Clustering Model[J]. Computer Science, 2025, 52(6A): 241000015-4. - LI Hengyi, YANG Guo, WEI Bo, CHEN Hongjun
- Computer Science. 2025, 52 (6A): 241000015-4. doi:10.11896/jsjkx.241000015
-
Abstract
PDF(2106KB) ( 38 )
- References | Related Articles | Metrics
-
With the rapid deployment of 5G communication networks,their importance in the construction of an information-based society has become increasingly prominent.The application of 5G heterogeneous network technology and centralized C-RAN networking has brought efficient cell edge coordinated processing and cost savings,but it has also led to issues such as an excessively large fronthaul network scale and increased transmission line construction costs.To address this problem,this paper proposes a base station engineering planning method based on clustering and heuristic algorithms to investigate the optimal deployment locations for C-RAN base stations.This method constructs a K-means clustering model,using the Euclidean distance between base stations and AAU/RRU as a constraint,to seek the optimal base station deployment locations.In the simulation and result analysis,the elbow method is used to determine the optimal number of clusters K.The C-RAN site locations determined in this way are more reasonable,ensuring connectivity to each wireless transceiver point while minimizing the cost of optical cable consumption.This method has good generalizability and can provide useful references for future mobile communication network planning and construction.
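A minimal scikit-learn sketch of the planning core, using synthetic AAU/RRU coordinates; the elbow is read off the printed inertia values rather than computed automatically.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical AAU/RRU coordinates (in metres); each cluster centre is a
# candidate C-RAN site, keeping fibre distance to its transceivers small.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(c, 150, size=(40, 2))
                    for c in [(0, 0), (2000, 500), (800, 1800)]])

inertias = {}
for k in range(1, 8):                      # elbow method: compare inertia values
    inertias[k] = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points).inertia_
print({k: round(v / 1e6, 2) for k, v in inertias.items()})   # look for the "elbow"

best_k = 3                                 # chosen where the curve flattens
sites = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(points)
print("candidate C-RAN sites:", sites.cluster_centers_)
```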
-
Design of Computation Offloading Strategy for Device-Cloud Face Recognition System
冀乃庚, 王伟鹏, 窦逸辛. 端云人脸识别系统计算卸载策略设计[J]. 计算机科学, 2025, 52(6A): 240600065-7.
JI Naigeng, WANG Weipeng, DOU Yixin. Design of Computation Offloading Strategy for Device-Cloud Face Recognition System[J]. Computer Science, 2025, 52(6A): 240600065-7. - JI Naigeng, WANG Weipeng, DOU Yixin
- Computer Science. 2025, 52 (6A): 240600065-7. doi:10.11896/jsjkx.240600065
-
Abstract
PDF(2799KB) ( 47 )
- References | Related Articles | Metrics
-
This paper addresses the challenges of face recognition systems under the device-cloud collaborative structure,presenting a computation offloading strategy tailored to real-world scenarios to optimize recognition accuracy within resource constraints.Firstly,a device-cloud identification model integration method is proposed,which differs from the strict or lenient integration methods by enhancing both the true acceptance rate(TAR) and true rejection rate(TRR),ensuring that the device-cloud collaborative accuracy is higher than that of the device alone.Secondly,we propose a feature selection scheme based on the combined identification results,categorizing recognition risk levels by utilizing the statistical relationship between the recognition results and the proportion of positive and negative samples,and extracting feature combinations of high-risk scenarios.In addition,an optimization scheme for global resource control is proposed,which selects abnormal parks and terminals through statistical distribution differences and allocates more algorithm resources to improve global recognition accuracy.Finally,we also propose to use OC-SVM to adapt to scenarios with unequal sample distributions and outliers,facilitating the dynamic adjustment of the recall proportion.Experimental results demonstrate that the optimized scheme proposed in this paper is efficient in improving algorithmic accuracy within resource constraints,and it shows practical value and potential for application.
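The OC-SVM component can be illustrated with scikit-learn's OneClassSVM on hypothetical device-side recognition features, where nu acts as the tunable upper bound on the fraction of samples recalled to the cloud.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical device-side features (e.g. score margin, image quality); the
# single "normal" class is low-risk recognitions, and flagged outliers are
# candidates for cloud re-recognition.
rng = np.random.default_rng(0)
low_risk = rng.normal([0.9, 0.8], 0.05, size=(300, 2))
new_batch = np.vstack([rng.normal([0.9, 0.8], 0.05, size=(45, 2)),
                       rng.normal([0.5, 0.3], 0.10, size=(5, 2))])

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(low_risk)
flags = ocsvm.predict(new_batch)             # +1 = handle on device, -1 = offload
print("offloaded to cloud:", np.where(flags == -1)[0])
```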
-
Safety-Critical Software Testing Modeling Method Based on MARTE and STAMP
薛雯耀, 王轶辰, 任庆玮. 基于MARTE和STAMP的安全关键软件测试建模方法[J]. 计算机科学, 2025, 52(6A): 240500080-10.
XUE Wenyao, WANG Yichen, REN Qingwei. Safety-Critical Software Testing Modeling Method Based on MARTE and STAMP[J]. Computer Science, 2025, 52(6A): 240500080-10. - XUE Wenyao, WANG Yichen, REN Qingwei
- Computer Science. 2025, 52 (6A): 240500080-10. doi:10.11896/jsjkx.240500080
-
Abstract
PDF(6717KB) ( 41 )
- References | Related Articles | Metrics
-
The application of model-based systems engineering(MBSE) methods in the development and testing of safety-critical software has become a current research hotspot.However,accurately and comprehensively modeling the safety attributes of software remains a significant challenge.Safety-critical software,typically embedded in real-time systems,must not only meet stringent functional and safety requirements but also execute operations correctly within strict time constraints to ensure real-time performance and system reliability.In modern software engineering,as the complexity of safety-critical software increases,traditional modeling methods can no longer adequately address the dual demands of high safety and real-time performance.This paper focuses on integrating safety characteristics into model-based testing techniques for safety-critical software,proposing an innovative modeling approach based on the MARTE(modeling and analysis of real-time and embedded systems) language and the STAMP(systems-theoretic accident model and process) theory.This approach extends MARTE stereotypes,adds tags to constrain non-functional properties,and incorporates the STAMP control structure model into the MARTE view hierarchy.A multi-view hybrid model is formed through iterative modeling using STPA(system theoretic process analysis) techniques.Steps in the STPA method,including control structure construction,identification of unsafe control actions,and causal scenario analysis,provide deeper analysis and greater potential for automation.Experimental results demonstrate that the proposed modeling method can effectively and clearly present both functional and non-functional performance requirements of software systems,thus better achieving the characterization of software safety properties based on models.This approach also provides a stronger technical foundation for automated modeling.In the future,we aim to further advance the automation of test model construction,develop software tools that can automatically implement model building and STPA safety analysis,and generate test cases and test systems,thereby enhancing the efficiency of model-based testing techniques.
-
Integrated PU Learning Method PUEVD and Its Application in Software Source CodeVulnerability Detection
包晟宏, 姚有健, 李小丫, 陈文. 集成式PU学习方法PUEVD及其在软件源码漏洞检测中的应用[J]. 计算机科学, 2025, 52(6A): 241100144-9.
BAO Shenghong, YAO Youjian, LI Xiaoya, CHEN Wen. Integrated PU Learning Method PUEVD and Its Application in Software Source CodeVulnerability Detection[J]. Computer Science, 2025, 52(6A): 241100144-9. - BAO Shenghong, YAO Youjian, LI Xiaoya, CHEN Wen
- Computer Science. 2025, 52 (6A): 241100144-9. doi:10.11896/jsjkx.241100144
-
Abstract
PDF(4427KB) ( 42 )
- References | Related Articles | Metrics
-
Compared to traditional methods,AI-based vulnerability detection reduces reliance on expert knowledge and improves detection efficiency.However,training these models typically requires many labeled samples,which are difficult to obtain in practice.Therefore,effectively utilizing large volumes of unlabeled code samples to enhance model performance under limited labeled data has become a critical issue in automatic software vulnerability detection.Positive-Unlabeled(PU) Learning,a semi-supervised approach,combines a small set of positive samples with a large number of unlabeled samples to train models like random forests.By assigning class scores to unlabeled samples,PU Learning generates pseudo-labeled data,improving training performance.However,PU Learning may generate incorrect labels when sample scores are close to the threshold.This paper proposes an integrated PU Learning method(PUEVD),to achieve semi-supervised vulnerability detection in source code.PUEVD first calculates class scores of unlabeled samples using a random forest,filters key features,and randomly selects feature subsets.For each subset,it calculates the similarity difference between misclassification-prone samples and reliable positive/negative samples.Based on ensemble learning,PUEVD aggregates these similarity differences across subsets to adjust and optimize class scores,reducing the risk of misclassification.Applied to vulnerability detection with limited labeled samples,PUEVD was validated on standard datasets,including CWE399,libtiff,and asterisk,showing improved AUC and F1 scores over traditional methods,thus demonstrating its effectiveness in vulnerability detection.
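The plain PU step that PUEVD starts from, and whose near-threshold samples it then refines, can be sketched as follows; the features are synthetic and the thresholds arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pu_pseudo_label(X_pos, X_unlabeled, hi=0.8, lo=0.2, seed=0):
    """Basic PU step: train a random forest with positives vs. unlabeled
    treated as negative, score the unlabeled pool, and keep only confidently
    scored samples as pseudo-labels. Samples between `lo` and `hi` are the
    misclassification-prone ones that PUEVD's ensemble refinement targets."""
    X = np.vstack([X_pos, X_unlabeled])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    scores = rf.predict_proba(X_unlabeled)[:, 1]          # class score per sample
    pseudo_pos = X_unlabeled[scores >= hi]
    pseudo_neg = X_unlabeled[scores <= lo]
    uncertain = X_unlabeled[(scores > lo) & (scores < hi)]
    return pseudo_pos, pseudo_neg, uncertain

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 0.5, size=(30, 8))                # labeled vulnerable samples
X_unl = np.vstack([rng.normal(1.0, 0.5, size=(50, 8)),
                   rng.normal(-1.0, 0.5, size=(200, 8))])
p, n, u = pu_pseudo_label(X_pos, X_unl)
print(len(p), len(n), len(u))
```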
-
BiGCN-TL:Bipartite Graph Convolutional Neural Network Transformer Localization Model for Software Bug Partial Localization Scenarios
施恩译, 常舒予, 陈可佳, 张扬, 黄海平. BiGCN-TL:软件错误部分定位场景下二分图图卷积神经网络Transformer定位模型[J]. 计算机科学, 2025, 52(6A): 250200086-11.
SHI Enyi, CHANG Shuyu, CHEN Kejia, ZHANG Yang, HUANG Haiping. BiGCN-TL:Bipartite Graph Convolutional Neural Network Transformer Localization Model for Software Bug Partial Localization Scenarios[J]. Computer Science, 2025, 52(6A): 250200086-11. - SHI Enyi, CHANG Shuyu, CHEN Kejia, ZHANG Yang, HUANG Haiping
- Computer Science. 2025, 52 (6A): 250200086-11. doi:10.11896/jsjkx.250200086
-
Abstract
PDF(3699KB) ( 33 )
- References | Related Articles | Metrics
-
In modern complex software projects,software bugs and code changes exhibit a “many-to-many” correspondence:a single bug is often caused by multiple code changes,and a single code change can introduce multiple bugs.As a result,bug localization is often only partial,making it difficult to trace all relevant code changes.Traditional architectures typically extract semantic features of code changes and bug reports independently,relying solely on their respective contexts.However,given the large scale of modern software projects and their intricate code dependencies,such independent semantic extraction reduces the quality and robustness of individual text representations,ultimately degrading localization performance.To achieve comprehensive tracing of code related to software bugs,this paper proposes BiGCN-TL.This model focuses on enhancing the information interaction between different textual inputs,aiming to reduce reliance on the quality of individual text features.Even in scenarios where large-scale software projects exhibit complex dependencies and challenging semantic feature extraction from a single text,BiGCN-TL leverages efficient information exchange to extract high-quality semantic representations,thereby improving localization accuracy.Firstly,based on known partial localization relationships,we fine-tune a Transformer-based pre-trained model.Then,we innovatively model software bugs and code changes as a bipartite graph,leveraging the known “many-to-many” relationships.The fine-tuned encoder is used to generate the initial node representations.Secondly,this study designs a link prediction task on the bipartite graph,training a GCN and a binary classification discriminator.Through graph convolution operations and attention mechanisms,node representations are dynamically updated,emphasizing the ability to promote textual information interaction and refine global classification features.The final output is a matching prediction score.Extensive comparative experiments conducted on multiple datasets validate the superiority of BiGCN-TL over traditional approaches.Additionally,ablation studies confirm the effectiveness of each module.Furthermore,the generalizability and robustness of BiGCN-TL are further verified by exploring a variety of combinations of pre-trained models and GCNs,combined with detailed analysis and visualization.
-
Configuration-guided Directed Kernel Fuzzing for Real-time Linux
施鹤远, 陈世俊, 张强, 沈煜恒, 姜宇, 施荣华. 基于配置引导的实时Linux内核靶向模糊测试[J]. 计算机科学, 2025, 52(6A): 240400161-8.
SHI Heyuan, CHEN Shijun, ZHANG Qiang, SHEN Yuheng, JIANG Yu, SHI Ronghua. Configuration-guided Directed Kernel Fuzzing for Real-time Linux[J]. Computer Science, 2025, 52(6A): 240400161-8. - SHI Heyuan, CHEN Shijun, ZHANG Qiang, SHEN Yuheng, JIANG Yu, SHI Ronghua
- Computer Science. 2025, 52 (6A): 240400161-8. doi:10.11896/jsjkx.240400161
-
Abstract
PDF(3171KB) ( 58 )
- References | Related Articles | Metrics
-
The real-time Linux,due to its real-time characteristics,has been widely applied in various high-precision scenarios,which underscores the importance of its own security and reliability.However,the current methods for locating code sections related to real-time features are limited,resulting in coverage-oriented kernel fuzzing tools,such as Syzkaller,lacking the ability to test this code comprehensively and thoroughly.To address this issue,this paper proposes a configuration-guided targeted fuzzing approach for the real-time Linux kernel.Our approach first constructs a kernel file tree by combining kernel configuration options,identifying real-time feature code,and building test targets.Next,it leverages the inter-function call relationships and basic block addresses within the real-time Linux kernel to define specific testing targets for real-time features.Finally,it utilizes a weight-based seed scheduling strategy to enhance the efficiency of directed testing in kernel fuzzing.In testing tasks across four versions of real-time Linux kernels,the proposed method identifies 58 kernel defects related to real-time features.Compared to the general coverage-guided kernel fuzz testing method Syzkaller,our approach achieves a 17.06% increase in the basic block coverage of real-time feature code and a 65.39% improvement in the detection of vulnerabilities related to real-time features.Experimental results demonstrate that this method significantly enhances the capabilities of kernel fuzz testing tools in terms of coverage of real-time feature related code and directed testing ability.
-
Application of Requirements Traceability in Code Static Analysis
陈望旭, 文昊, 倪洋. 需求可追溯性在代码静态分析中的应用[J]. 计算机科学, 2025, 52(6A): 241000024-5.
CHEN Wangxu, WEN Hao, NI Yang. Application of Requirements Traceability in Code Static Analysis[J]. Computer Science, 2025, 52(6A): 241000024-5. - CHEN Wangxu, WEN Hao, NI Yang
- Computer Science. 2025, 52 (6A): 241000024-5. doi:10.11896/jsjkx.241000024
-
Abstract
PDF(2340KB) ( 36 )
- References | Related Articles | Metrics
-
For software requirements reverse engineering,traditional static code analysis methods require a large amount of manual labeling,which is a huge and redundant burden in the research and development process.In view of this,a static code analysis method based on requirements traceability and a graph database storage structure is proposed.Firstly,the call graph of software methods is generated by static analysis and stored in a graph database,and the graph is then initialized with a small number of manual marks.Experimental results show that the proposed method achieves a high standard of requirement coverage on the inferred method nodes while greatly reducing the workload of manual labeling.
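The propagation idea can be sketched with networkx: a few manually labeled entry methods seed the call graph, and every reachable method inherits their requirement labels. The method names are toy examples, and this sketch uses an in-memory graph rather than the graph database used in the paper.

```python
import networkx as nx

# Toy call graph: edges point from caller to callee.
G = nx.DiGraph()
G.add_edges_from([("login", "check_pwd"), ("login", "audit_log"),
                  ("transfer", "check_pwd"), ("transfer", "update_balance")])

manual_labels = {"login": {"REQ-AUTH"}, "transfer": {"REQ-PAYMENT"}}  # few seeds

coverage = {m: set() for m in G.nodes}
for method, reqs in manual_labels.items():
    for reachable in nx.descendants(G, method) | {method}:
        coverage[reachable] |= reqs            # inferred requirement traceability

for method, reqs in coverage.items():
    print(method, "->", sorted(reqs) or ["UNCOVERED"])
```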
-
Research on Efficient Code Generation Techniques for Array Computation for Vector DSPs
廖泽明, 刘桂开, 胡勇华, 谢安星. 向量DSP的数组计算高效代码生成技术研究[J]. 计算机科学, 2025, 52(6A): 240300156-7.
LIAO Zeming, LIU Guikai, HU Yonghua, XIE Anxing. Research on Efficient Code Generation Techniques for Array Computation for Vector DSPs[J]. Computer Science, 2025, 52(6A): 240300156-7. - LIAO Zeming, LIU Guikai, HU Yonghua, XIE Anxing
- Computer Science. 2025, 52 (6A): 240300156-7. doi:10.11896/jsjkx.240300156
-
Abstract
PDF(2429KB) ( 45 )
- References | Related Articles | Metrics
-
With the continuous development of large-scale integrated circuit technology,vector DSPs incorporating SIMD,VLIW and other instruction-level parallel processing technologies have gained more and more attention and application in the field of high-performance computing.Adapting different kinds of algorithm function libraries has become one of the key challenges for vector DSPs.Only by reducing repetitive programming work and concentrating more on code optimization based on the vector DSP architecture and hardware resources can application development efficiency be effectively improved.Taking into account the amount of data involved in vector DSP computations,we propose a template-based automatic code generation method for array computation,which implements automated dynamic cache allocation,data rearrangement for discontinuous data accesses,and optimization of scalar instructions,so that the generated code can use the dedicated vector resources of the processor.Experimental results show that using this technique substantially improves the efficiency of obtaining relevant function code;the generated vector computation assembly code reaches about 75% of the average performance of handwritten assembly code,and achieves an average speedup of 8.7 times over scalar assembly code.
-
Study on t/s Diagnosability and t/s Diagnostic Algorithm of (n,k)-Arrangement Graphs
张世豪, 冷明. (n,k)-排列图的t/s诊断度与t/s 诊断算法研究[J]. 计算机科学, 2025, 52(6A): 240700180-9.
ZHANG Shihao, LENG Ming. Study on t/s Diagnosability and t/s Diagnostic Algorithm of (n,k)-Arrangement Graphs[J]. Computer Science, 2025, 52(6A): 240700180-9. - ZHANG Shihao, LENG Ming
- Computer Science. 2025, 52 (6A): 240700180-9. doi:10.11896/jsjkx.240700180
-
Abstract
PDF(2508KB) ( 33 )
- References | Related Articles | Metrics
-
Given the increasingly severe fault risks in multiprocessor systems,particularly in the field of supercomputers,enhancing system reliability and fault tolerance has emerged as a critical issue that requires urgent attention.In response to this need,the (n,k)-arrangement graph has arisen as a novel interconnection network topology.It is a generalization and variation of the star graph network,inheriting its inherent symmetry and fault tolerance while offering greater flexibility.However,current research on the reliability of the (n,k)-arrangement graph is still incomplete.On this basis,this paper explores the t/s diagnosability and t/s diagnosis algorithm of the (n,k)-arrangement graph.Initially,we present a series of relevant topological properties.Subsequently,we prove the t/s diagnosability of the (n,k)-arrangement graph under the PMC model.Finally,we design a fast diagnosis algorithm with a time complexity of O(N log2 N) to identify faulty nodes within the (n,k)-arrangement graph.The diagnosability of (n,k)-arrangement graphs has been identified,further refining the reliability metrics of (n,k)-arrangement graph networks and offering crucial reliability performance criteria for their application and promotion.
-
CNFED:An Error Detection Tool for Floating-point Expressions Based on Condition Number
王盼龙, 王磊, 英津瑞, 刘博文, 高志勇. CNFED:一种基于条件数的浮点表达式误差检测工具[J]. 计算机科学, 2025, 52(6A): 240800070-8.
WANG Panlong, WANG Lei, YING Jinrui, LIU Bowen, GAO Zhiyong. CNFED:An Error Detection Tool for Floating-point Expressions Based on Condition Number[J]. Computer Science, 2025, 52(6A): 240800070-8. - WANG Panlong, WANG Lei, YING Jinrui, LIU Bowen, GAO Zhiyong
- Computer Science. 2025, 52 (6A): 240800070-8. doi:10.11896/jsjkx.240800070
-
Abstract
PDF(2486KB) ( 46 )
- References | Related Articles | Metrics
-
Floating-point numbers use finite precision to represent real numbers,and their inherent rounding errors can accumulate during calculations,potentially leading to serious errors that jeopardize program safety and reliability.Theoretically,the most precise method for detecting floating-point errors is exhaustive search of all possible floating-point inputs to determine the maximum error between actual computation results and theoretical values.However,the search space is enormous,so effectively and efficiently detecting maximum floating-point errors has remained a challenge.Based on a study of condition numbers,a tool for floating-point expression error detection,CNFED,has been designed and implemented.CNFED divides the input interval into multiple sub-intervals and conducts random sampling and evaluation for each sub-interval to quickly locate multiple hotspot sub-intervals.It then hierarchically applies global and local search algorithms to these hotspot sub-intervals,using corresponding evaluation functions for filtering,ultimately identifying potential maximum floating-point errors and reporting the corresponding input values.The experiments select 26 expressions from the FPBench standard test suite as test cases and compare CNFED with the advanced detection tools ATOMU and HSED.The experimental results indicate that CNFED outperforms ATOMU in 96.15% of cases(25/26).Compared to the floating-point expression detection tool HSED,CNFED surpasses HSED in 34.62% of cases(9/26),while the average time taken by HSED is 4.8 times that of CNFED.
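The underlying principle can be illustrated on a single expression: compare a naive double-precision evaluation against a high-precision mpmath oracle over sampled inputs and look for the error hotspot. This is a toy reproduction of the sampling idea, not the CNFED tool.

```python
import math
import numpy as np
from mpmath import mp, mpf

mp.dps = 50                                    # high-precision oracle

def rel_error(x: float) -> float:
    """Relative error of the naive float evaluation of log(1+x)/x against a
    50-digit reference. The condition number of log(u) is 1/|ln(u)|, which
    explodes as u -> 1, so the rounding error of 1.0 + x is amplified."""
    naive = math.log(1.0 + x) / x
    exact = mp.log(1 + mpf(x)) / mpf(x)
    return float(abs((mpf(naive) - exact) / exact))

# Coarse sampling to locate the error hotspot, in the spirit of CNFED's
# sub-interval search; all inputs here are positive and nonzero.
xs = np.logspace(-16, 0, 200)
worst = max(xs, key=rel_error)
print("hotspot input:", worst, "relative error:", rel_error(worst))
```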
-
Analysis of DNS Threats and the Challenges of DNS Security
郁毅明, 陈远志, 郎君. DNS威胁面分析及其安全防护现状与挑战[J]. 计算机科学, 2025, 52(6A): 240900140-8.
YU Yiming, CHEN Yuanzhi, LANG Jun. Analysis of DNS Threats and the Challenges of DNS Security[J]. Computer Science, 2025, 52(6A): 240900140-8. - YU Yiming, CHEN Yuanzhi, LANG Jun
- Computer Science. 2025, 52 (6A): 240900140-8. doi:10.11896/jsjkx.240900140
-
Abstract
PDF(2185KB) ( 35 )
- References | Related Articles | Metrics
-
With the widespread adoption and growing complexity of the Internet, the domain name system (DNS), a core component of global network communications, faces intensifying challenges concerning security, privacy, and performance. Starting from an analysis of prevalent DNS attacks, the threat landscape of the DNS protocol and the system itself is examined, elucidating the current inadequacies and flaws of the DNS protocol from four aspects: integrity, confidentiality, availability, and authenticity. The current issues faced by DNS are summarized within an expanded framework of information security essentials. Subsequently, the prevalent enhancements and protective measures for DNS are introduced, focusing on existing research in three primary areas: protocol reinforcement, intrusion detection system augmentation, and system strengthening. These efforts are then summarized and evaluated with respect to their strengths and limitations. Finally, future research directions are proposed with an emphasis on decentralization, and a pivotal construction area, the traffic data retention project, is highlighted, offering insights and prospects for the future development of DNS security technologies.
-
Large Scale Network Defense Algorithm Based on Temporal Network Flow Watermarking Technology
朱柯达, 蔡瑞杰, 刘胜利. 基于时间式网络流水印技术的大规模网络防御算法[J]. 计算机科学, 2025, 52(6A): 240900110-6.
ZHU Keda, CAI Ruijie, LIU Shengli. Large Scale Network Defense Algorithm Based on Temporal Network Flow Watermarking Technology[J]. Computer Science, 2025, 52(6A): 240900110-6. - ZHU Keda, CAI Ruijie, LIU Shengli
- Computer Science. 2025, 52 (6A): 240900110-6. doi:10.11896/jsjkx.240900110
-
Abstract
PDF(3105KB) ( 51 )
- References | Related Articles | Metrics
-
Network attackers use multiple nodes such as darknet hosts, stepping stones, and other relay links to create complex and unpredictable attack paths, making it difficult to trace the entire chain and leading to unstable detection effectiveness in large-scale networks. Therefore, a large-scale network defense algorithm based on temporal network flow watermarking technology is proposed. The algorithm groups large-scale network data streams by time interval, reducing false alarms caused by single-parameter anomalies. Through convolutional encoding and traffic modulation, a temporal watermark is embedded into the data stream so that the watermark information remains stable in the face of network traffic fluctuations, enhancing the robustness of the watermark. By comparing the joint centroid entropy of multiple streams in a temporal network, the marked streams containing watermarks can be identified quickly. Experiments show that the proposed algorithm is less affected by jitter and can guarantee watermark embedding, achieving defense against large-scale network attacks.
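The paper's convolutional encoding and joint centroid entropy detection are not reproduced here, but the underlying idea of interval-based centroid watermarking can be sketched as follows; the interval length, delay scheme, and threshold are illustrative assumptions applied to a synthetic, jitter-free flow.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.5                                       # interval length (seconds)
bits = rng.integers(0, 2, size=16)            # watermark bits, one per interval
pkts = np.sort(rng.uniform(0, len(bits) * T, 4000))   # packet timestamps of one flow

idx = np.minimum((pkts // T).astype(int), len(bits) - 1)
off = pkts - idx * T

# Embed: in every bit-1 interval, delay each packet so its offset is squeezed
# into the upper half of the interval; bit-0 intervals are left untouched.
new_off = np.where(bits[idx] == 1, (off + T) / 2.0, off)
marked = idx * T + new_off

# Detect: recompute interval membership from the observed timestamps and compare
# each interval's centroid (mean offset) with the midpoint of the two expected
# centroids (T/2 for bit 0, 3T/4 for bit 1).
obs_idx = np.minimum((marked // T).astype(int), len(bits) - 1)
cent = np.array([(marked[obs_idx == i] - i * T).mean() for i in range(len(bits))])
decoded = (cent > 5 * T / 8).astype(int)
print((decoded == bits).mean())               # 1.0 on this jitter-free flow
```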
-
Federated Learning Privacy Protection Method Combining Dataset Distillation
王春东, 张清华, 付浩然. 一种结合数据集蒸馏的联邦学习隐私保护方法[J]. 计算机科学, 2025, 52(6A): 240500132-7.
WANG Chundong, ZHANG Qinghua, FU Haoran. Federated Learning Privacy Protection Method Combining Dataset Distillation[J]. Computer Science, 2025, 52(6A): 240500132-7. - WANG Chundong, ZHANG Qinghua, FU Haoran
- Computer Science. 2025, 52 (6A): 240500132-7. doi:10.11896/jsjkx.240500132
-
Abstract
PDF(2419KB) ( 30 )
- References | Related Articles | Metrics
-
Federated learning trains a global model by exchanging model parameters rather than raw data, with the goal of protecting privacy. However, numerous studies have shown that attackers can infer the original training data from intercepted gradients, leading to privacy leakage on the clients. In addition, because different clients use different sampling methods, the collected data are non-independent and identically distributed (non-IID), which degrades the overall training performance of the model. To cope with gradient inversion attacks, the dataset distillation method is introduced into the federated learning framework, combined with data augmentation to enhance the usability of the synthesized data. Furthermore, to address the heterogeneity of medical data from different institutions, a batch normalization layer is introduced on the client side to alleviate client drift and improve the overall performance of the model. Experimental results indicate that, while achieving performance comparable to other federated learning paradigms, the federated learning method combined with dataset distillation also enhances the protection of medical data privacy.
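Gradient matching is one common way to distill a client's private data into a small synthetic set, and its core step can be sketched as follows; the toy data, the linear model, and all hyperparameters are stand-ins, and the paper's data augmentation and batch-normalization components are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
real_x = torch.randn(256, 20)                      # a client's private batch (toy data)
real_y = (real_x[:, 0] > 0).long()                 # toy binary labels
syn_x = torch.randn(10, 20, requires_grad=True)    # 10 synthetic samples to learn
syn_y = torch.tensor([0, 1] * 5)                   # fixed balanced labels
model = nn.Linear(20, 2)
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

for step in range(200):
    # gradients of the task loss on real data, used only as matching targets
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y),
                                 model.parameters())
    g_real = [g.detach() for g in g_real]
    # gradients on synthetic data, keeping the graph so syn_x can be optimised
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    match = sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
                for a, b in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()
# syn_x (not real_x) is what the client would use when contributing to training
```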
-
Privacy Preservation of Crowdsourcing Content Based on Adversarial Generative Networks
黄晓宇, 姜贺萌, 凌嘉铭. 基于对抗生成网络的众包内容隐私保护[J]. 计算机科学, 2025, 52(6A): 250200123-7.
HUANG Xiaoyu, JIANG Hemeng, LING Jiaming. Privacy Preservation of Crowdsourcing Content Based on Adversarial Generative Networks[J]. Computer Science, 2025, 52(6A): 250200123-7. - HUANG Xiaoyu, JIANG Hemeng, LING Jiaming
- Computer Science. 2025, 52 (6A): 250200123-7. doi:10.11896/jsjkx.250200123
-
Abstract
PDF(2171KB) ( 50 )
- References | Related Articles | Metrics
-
Crowdsourcing is an emerging alternative to the outsourcing strategy that aims to make use of the wisdom of the crowd. Owing to its low cost and efficiency, crowdsourcing is widely recognized as an ideal solution for massive data-oriented processing tasks such as data labeling and model training. In crowdsourcing, however, to benefit from the wisdom of unforeseen workers, task owners must first make their private data publicly accessible without restriction, which is unsafe given the risk of information leakage. To address this issue, we propose a crowdsourcing model, PrivCS, that ensures the privacy security of task content. The essential idea of PrivCS is to synthesize new data with respect to the task owners' private data and publish the synthetic data to the workers instead of the real data. The tool adopted to synthesize the new data is the adversarial generative network (GAN). Numerous studies have shown that GANs are privacy-preserving, and PrivCS therefore inherits this property. We also study the theoretical performance of PrivCS; our analysis shows that the outputs of PrivCS are comparable with those derived from the real data in terms of both data labeling and model training tasks. In addition, our experimental results support the theoretical findings.
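A minimal GAN training loop of the kind PrivCS could rely on is sketched below on two-dimensional toy data; the network sizes, learning rates, and the stand-in "private" distribution are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
real = torch.randn(2000, 2) * 0.3 + torch.tensor([2.0, -1.0])   # stand-in for private data

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    x = real[torch.randint(0, len(real), (64,))]
    fake = G(torch.randn(64, 8))
    # discriminator step: real samples -> 1, generated samples -> 0
    d_loss = bce(D(x), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(500, 8)).detach()   # published to workers instead of `real`
```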
-
Security Situation Assessment Method for Intelligent Water Resources Network Based on Improved D-S Evidence
夏卓群, 周子豪, 邓斌, 康琛. 一种基于改进D-S证据的智慧水利网络安全态势评估方法[J]. 计算机科学, 2025, 52(6A): 240600051-6.
XIA Zhuoqun, ZHOU Zihao, DENG Bin, KANG Chen. Security Situation Assessment Method for Intelligent Water Resources Network Based on Improved D-S Evidence[J]. Computer Science, 2025, 52(6A): 240600051-6. - XIA Zhuoqun, ZHOU Zihao, DENG Bin, KANG Chen
- Computer Science. 2025, 52 (6A): 240600051-6. doi:10.11896/jsjkx.240600051
-
Abstract
PDF(2076KB) ( 36 )
- References | Related Articles | Metrics
-
Intelligent water conservancy is an important industry and field of national critical information infrastructure. Research on network security situation assessment technology provides powerful support for data protection and network security construction in smart water conservancy. This paper proposes a smart water conservancy situation assessment method based on improved D-S evidence theory, in response to the characteristics of smart water conservancy network models and to the insufficient objectivity and large evidence conflicts of network security situation assessment models based on a single D-S evidence theory. Firstly, facing massive water conservancy data, deep autoencoders are used to learn features and to filter and reduce the dimensionality of the data. Then, the processed data are fed into a deep neural network for binary and multi-class classification, and the results are fused to obtain the basic probability assignment values used as input to D-S evidence theory. Finally, the fusion rule of D-S evidence theory is applied to obtain the final network security situation assessment result. Experimental results show that, compared with traditional situation assessment models, the proposed method maintains high accuracy while improving objectivity.
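The fusion step at the heart of the method is Dempster's rule of combination, which can be sketched as follows; the two example bodies of evidence (one from a classifier, one from a rule source) and their mass values are hypothetical.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# hypothetical evidence from two detectors over the frame {normal, attack}
m_dnn  = {frozenset({"attack"}): 0.7, frozenset({"normal"}): 0.2,
          frozenset({"attack", "normal"}): 0.1}
m_rule = {frozenset({"attack"}): 0.6, frozenset({"normal"}): 0.3,
          frozenset({"attack", "normal"}): 0.1}
print(dempster_combine(m_dnn, m_rule))
```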
-
Study on High Payload Data Hiding Algorithm in Power System Network Communication Security
付佳佳, 黄东海, 卢建刚, 邓晓智, 亢中苗, 刘云. 电力系统网络通信安全中的高载荷信息隐藏算法研究[J]. 计算机科学, 2025, 52(6A): 240600024-8.
FU Jiajia, HUANG Donghai, LU Jiangang, DENG Xiaozhi, KANG Zhongmiao, LIU Yun. Study on High Payload Data Hiding Algorithm in Power System Network Communication Security[J]. Computer Science, 2025, 52(6A): 240600024-8. - FU Jiajia, HUANG Donghai, LU Jiangang, DENG Xiaozhi, KANG Zhongmiao, LIU Yun
- Computer Science. 2025, 52 (6A): 240600024-8. doi:10.11896/jsjkx.240600024
-
Abstract
PDF(4191KB) ( 36 )
- References | Related Articles | Metrics
-
With the rapid development of power network communication systems, a large amount of digital information can be transmitted more efficiently over the power network. However, alongside the improved communication efficiency, the growing risk of network attacks brings a series of security issues such as privacy leakage and information tampering. In this context, and especially in scenarios such as unmanned inspection and remote equipment monitoring, the secure transmission of defect-related information in the power system is particularly important. Therefore, to ensure the security of information transmission, data hiding technology has attracted widespread research attention. In view of the low embedding payload and low security common in current data hiding techniques, this paper proposes a high-payload data hiding algorithm based on a second-order Sudoku matrix, after comprehensively considering factors such as hiding capacity, steganographic quality, and security. The algorithm extends and encodes the original Sudoku to reconstruct a new second-order matrix, which guides every two nine-bit data units to be embedded into pixel pairs of the original image in a way that minimizes distortion, thereby achieving high-payload data hiding. The selection of the original Sudoku is determined by a secret key shared in advance by the communicating parties, which can be reliably transmitted through quantum key distribution technology, further enhancing the security of the proposed algorithm.
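The paper's second-order Sudoku matrix is not reconstructed here, but the classic first-order Sudoku-reference-matrix embedding it builds on can be sketched as follows: a 9x9 Sudoku tiled over a 256x256 reference matrix lets each pixel pair carry one base-9 digit (about 3.17 bits) with bounded distortion. The particular Sudoku construction and the search window size are illustrative choices.

```python
import numpy as np

# A valid Sudoku solution and its 256x256 tiling as a reference matrix.
S = np.array([[(3 * (i % 3) + i // 3 + j) % 9 for j in range(9)] for i in range(9)])
M = np.tile(S, (29, 29))[:256, :256]

def embed_digit(p1, p2, d):
    """Embed one base-9 digit d into the pixel pair (p1, p2): move the pair to the
    closest cell of a 9x9 search window whose reference value equals d."""
    x0 = min(max(p1 - 4, 0), 256 - 9)
    y0 = min(max(p2 - 4, 0), 256 - 9)
    best = None
    for x in range(x0, x0 + 9):
        for y in range(y0, y0 + 9):
            if M[x, y] == d:
                cost = (x - p1) ** 2 + (y - p2) ** 2
                if best is None or cost < best[0]:
                    best = (cost, x, y)
    return best[1], best[2]

def extract_digit(x, y):
    return int(M[x, y])

p1, p2, digit = 120, 77, 5
q1, q2 = embed_digit(p1, p2, digit)
assert extract_digit(q1, q2) == digit and abs(q1 - p1) <= 4 and abs(q2 - p2) <= 4
```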
-
Edge Computing Based Approach for Node Trust Evaluation in Blockchain Networks
赵婵婵, 尉晓敏, 石宝, 吕飞, 刘利彬, 张子阳. 基于边缘计算的区块链网络节点信任评估方法[J]. 计算机科学, 2025, 52(6A): 240600153-8.
ZHAO Chanchan, WEI Xiaomin, SHI Bao, LYU Fei, LIU Libin, ZHANG Ziyang. Edge Computing Based Approach for Node Trust Evaluation in Blockchain Networks[J]. Computer Science, 2025, 52(6A): 240600153-8. - ZHAO Chanchan, WEI Xiaomin, SHI Bao, LYU Fei, LIU Libin, ZHANG Ziyang
- Computer Science. 2025, 52 (6A): 240600153-8. doi:10.11896/jsjkx.240600153
-
Abstract
PDF(2791KB) ( 61 )
- References | Related Articles | Metrics
-
To address the problem of malicious devices and malicious data in edge computing, this paper proposes a node trust evaluation method based on edge computing. Firstly, blockchain technology and a cloud-edge framework are used to establish trust relationships between edge devices. Secondly, a trust-based consensus mechanism is added to the overall trust evaluation method, and a time-sensitive function is introduced to capture the timeliness requirements of trust values in different scenarios. Finally, to avoid deviations caused by subjective factors in calculating the trust value, a stability coefficient is added to ensure the reliability of the trust value. Simulation experiments validate that the proposed trust evaluation method achieves a higher node interaction success rate than other traditional trust evaluation methods under different malicious node ratios. When the malicious node ratio is 20%, the proposed method performs similarly to other methods; when the ratio is 40%, its success rate is 0.82, and when the ratio is 60%, its success rate is 0.68. The trust values of normal nodes and malicious nodes follow opposite trends over time: the trust value of normal nodes eventually reaches 0.9, while that of malicious nodes decreases to 0.2. To better observe the change of node trust values, the probability of malicious nodes performing malicious behaviors is set to 50%. The results also show that the proposed trust evaluation method can effectively respond to malicious nodes. Finally, the time consumption is compared under different numbers of nodes, and the results show that the proposed method consumes less time than traditional trust evaluation methods when dealing with a larger number of nodes. Therefore, the proposed method can make effective trust evaluations in the presence of a large number of malicious nodes. The method aims to determine how to select trusted nodes as target nodes for data storage and transmission, to calculate the trust values of edge nodes, and to reduce the impact of malicious nodes.
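The paper's exact trust formulas are not given in the abstract; the sketch below only illustrates the general ingredients it mentions (time sensitivity via decay and a stability coefficient against erratic behaviour), with hypothetical weights and data.

```python
import math

def trust_value(interactions, now, half_life=600.0):
    """Illustrative time-decayed trust score (not the paper's exact formulas).
    `interactions` is a list of (timestamp, success_flag) for one edge node."""
    if not interactions:
        return 0.5                                     # neutral prior for unseen nodes
    w = [math.exp(-math.log(2) * (now - t) / half_life) for t, _ in interactions]
    s = [1.0 if ok else 0.0 for _, ok in interactions]
    direct = sum(wi * si for wi, si in zip(w, s)) / sum(w)   # recency-weighted success rate
    mean = sum(s) / len(s)
    var = sum((si - mean) ** 2 for si in s) / len(s)
    stability = 1.0 - var / 0.25                       # 1 = consistent, 0 = maximally erratic
    return stability * direct + (1.0 - stability) * 0.5

good = [(t, True) for t in range(0, 1000, 50)]
flaky = [(t, t % 100 == 0) for t in range(0, 1000, 50)]
print(trust_value(good, now=1000), trust_value(flaky, now=1000))
```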
-
Circuit Module Reliability Calculation Method for Multi-target Tracking
金矫波, 朱添田. 一种面向多目标跟踪的电路模块可靠性计算方法[J]. 计算机科学, 2025, 52(6A): 240800094-6.
JIN Jiaobo, ZHU Tiantian. Circuit Module Reliability Calculation Method for Multi-target Tracking[J]. Computer Science, 2025, 52(6A): 240800094-6. - JIN Jiaobo, ZHU Tiantian
- Computer Science. 2025, 52 (6A): 240800094-6. doi:10.11896/jsjkx.240800094
-
Abstract
PDF(2301KB) ( 46 )
- References | Related Articles | Metrics
-
In the process of circuit reliability calculation, effectively tracking multiple target trajectories is one of the key measures for the targeted implementation of high-reliability circuit design. This paper selects the PTM method, which has been effectively validated for the accurate assessment of circuit reliability, as the modeling tool for multi-target tracking to ensure the precision of the calculations. The structure of the circuit and the computational principles of the PTM method are analyzed, and, taking faults in the input signals into account, a hybrid encoding mechanism combining binary and decimal codes is proposed to implement the calculation strategy for multi-target trajectory tracking. This method can compute the reliability from the original inputs to any location within the circuit and can identify the sensitive elements of the circuit during the calculation, with a computational complexity that is linear in the number of gates. Experimental results on benchmark circuits validate the effectiveness of the proposed method, and the sensitivity of the calculation results to various tracking targets is also analyzed and compared.
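The PTM calculus itself is standard and can be sketched as follows for a NAND gate feeding a NOT gate (a faulty AND): gate PTMs are composed by matrix product in series and by Kronecker product in parallel, and reliability is the probability mass landing on the ideal outputs; the gate error probability q and the uniform input distribution are illustrative assumptions.

```python
import numpy as np

def ptm(itm, q):
    """Probabilistic transfer matrix of a single-output gate that is correct with prob q."""
    return q * itm + (1 - q) * (1 - itm)

ITM_NAND = np.array([[0, 1],   # inputs 00 -> output 1
                     [0, 1],   # 01 -> 1
                     [0, 1],   # 10 -> 1
                     [1, 0]])  # 11 -> 0
ITM_NOT = np.array([[0, 1],    # 0 -> 1
                    [1, 0]])   # 1 -> 0

q = 0.95
# serial composition (NAND feeding NOT, i.e. a faulty AND): matrix product
PTM_AND = ptm(ITM_NAND, q) @ ptm(ITM_NOT, q)
ITM_AND = ITM_NAND @ ITM_NOT
# reliability under uniform inputs: probability mass landing on the ideal output
p_in = np.full(4, 0.25)
reliability = float(p_in @ (ITM_AND * PTM_AND).sum(axis=1))
print(reliability)   # parallel (independent) gates would combine with np.kron instead
```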
-
Source Recording Device Verification Forensics of Digital Speech Based on End-to-End Deep Learning
邹领, 朱磊, 邓阳君, 张红燕. 基于端到端深度学习的数字语音源录音设备确认取证[J]. 计算机科学, 2025, 52(6A): 240800028-7.
ZOU Ling, ZHU Lei, DENG Yangjun, ZHANG Hongyan. Source Recording Device Verification Forensics of Digital Speech Based on End-to-End Deep Learning[J]. Computer Science, 2025, 52(6A): 240800028-7. - ZOU Ling, ZHU Lei, DENG Yangjun, ZHANG Hongyan
- Computer Science. 2025, 52 (6A): 240800028-7. doi:10.11896/jsjkx.240800028
-
Abstract
PDF(2912KB) ( 37 )
- References | Related Articles | Metrics
-
Audio editing software and DeepFake technology make it easy to tamper with and forge digital audio and speech recordings. The authenticity and integrity of a digital audio or speech recording must therefore be established before it can be used as valid judicial evidence. Source recording device verification (SRDV) for digital speech is one of the key problems of device source forensics of digital audio: given a speech recording and a recording device, SRDV determines whether or not the speech recording was recorded by the claimed device. In recent years, deep learning technology has been widely applied across numerous fields and has yielded impressive results. However, current research on audio recording device forensics has primarily focused on source recording device identification (SRDI), and no SRDV methods based on deep learning have been reported. In this paper, a novel end-to-end (E2E) deep learning based SRDV scheme is proposed. The FBank feature extracted from speech recordings is used to characterize the device fingerprint and serves as the input to the deep neural network. For the deep architecture, a parameter-adjusted VGG-M model is employed. The entire network is trained with the generalized end-to-end (GE2E) loss, and the recording device embedding (RDE) is extracted through a self-attentive pooling (SAP) layer followed by a fully connected layer. The equal error rate (EER) is adopted as the evaluation metric. Evaluation experiments are conducted on a carefully designed development set and test set, and the results demonstrate that the proposed method achieves significant improvements on the SRDV problem.
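The FBank front end used as the network input can be sketched with torchaudio's Kaldi-compatible extractor as follows; the 64 filterbank bins, 25 ms/10 ms framing, the file name, and the per-utterance mean normalisation are illustrative choices, not parameters confirmed by the paper.

```python
import torchaudio

waveform, sr = torchaudio.load("utterance.wav")   # shape (channels, samples); placeholder file
waveform = waveform[:1]                           # keep a single channel

# log-Mel filterbank (FBank) features: 64 bins, 25 ms window, 10 ms shift
fbank = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=64,
    frame_length=25.0,
    frame_shift=10.0,
    sample_frequency=sr,
)
# simple per-utterance mean normalisation before feeding the network
fbank = fbank - fbank.mean(dim=0, keepdim=True)
print(fbank.shape)                                # (num_frames, 64)
```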
-
Dual-platform Key Agreement Protocol Based on Semidirect Product
张静, 王宇平. 基于半直积的双平台密钥协商协议[J]. 计算机科学, 2025, 52(6A): 240600036-6.
ZHANG Jing, WANG Yuping. Dual-platform Key Agreement Protocol Based on Semidirect Product[J]. Computer Science, 2025, 52(6A): 240600036-6. - ZHANG Jing, WANG Yuping
- Computer Science. 2025, 52 (6A): 240600036-6. doi:10.11896/jsjkx.240600036
-
Abstract
PDF(1726KB) ( 42 )
- References | Related Articles | Metrics
-
Cryptosystems based on non-commutative algebra have advantages in resisting quantum computing attacks, and combinatorial group theory, as an important source of non-commutative algebra, has been widely applied in cryptography. By exploiting certain hard mathematical problems on combinatorial groups, cryptographic schemes resistant to quantum computing attacks can be constructed. The semidirect product is an important concept in combinatorial group theory, and the decomposition search problem on certain groups is intractable, so a non-commutative key agreement protocol based on it is introduced. Using the construction of semidirect products, a class of cryptography-friendly non-commutative groups, built from non-commutative groups and cyclic groups of their bijections, is constructed for the two communicating parties to use as cryptographic platforms. Key agreement is then achieved through a single interaction even when the communicating parties select different cryptographic platforms. The security of the protocol is reduced to the decomposition search-discrete logarithm problem on non-commutative groups, and its resistance against algebraic and exhaustive attacks is elaborated in detail. Finally, taking braid groups as an example, it is shown that the computational and storage complexities of the protocol are polynomial, so the protocol has practical value. In addition, the dual-platform design can conceal user information to a greater extent, giving the protocol potential applications in the post-quantum cryptography era.
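The paper's dual-platform construction is not reproduced here, but the generic semidirect-product key exchange it generalises can be sketched as follows, using 2x2 matrices over Z_p with the automorphism phi(x) = H x H^{-1}; the platform, the public element g and matrix H, and the private exponents are toy choices, and a real implementation would compute the pair powers by square-and-multiply rather than the naive loop shown.

```python
p = 2003                                     # small prime, for illustration only

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % p for j in range(2)] for i in range(2)]

def mat_pow(a, e):
    r = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            r = mat_mul(r, a)
        a, e = mat_mul(a, a), e >> 1
    return r

def mat_inv(a):                              # 2x2 inverse mod p via the adjugate
    det = (a[0][0] * a[1][1] - a[0][1] * a[1][0]) % p
    d = pow(det, -1, p)
    return [[a[1][1] * d % p, -a[0][1] * d % p], [-a[1][0] * d % p, a[0][0] * d % p]]

def phi_k(x, H, k):                          # phi^k(x) = H^k x H^{-k}
    Hk = mat_pow(H, k)
    return mat_mul(mat_mul(Hk, x), mat_inv(Hk))

def A(g, H, m):                              # first component of (g, phi)^m
    acc = g
    for k in range(1, m):
        acc = mat_mul(acc, phi_k(g, H, k))
    return acc

g = [[1, 2], [3, 5]]                         # public group element
H = [[2, 1], [1, 1]]                         # public invertible matrix defining phi
m, n = 29, 41                                # Alice's and Bob's private exponents
Am, An = A(g, H, m), A(g, H, n)              # the only values actually exchanged
K_alice = mat_mul(Am, phi_k(An, H, m))
K_bob = mat_mul(An, phi_k(Am, H, n))
assert K_alice == K_bob                      # shared key: first component of (g, phi)^(m+n)
```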
-
Reversible Data Hiding in Fully Encrypted Images Based on Pixel Interval Partitioning and Prediction Recovery
刘润军, 肖凤军, 胡伟通, 王旭. 基于像素区间划分及预测恢复的完全加密图像可逆信息隐藏[J]. 计算机科学, 2025, 52(6A): 240900030-8.
LIU Runjun, XIAO Fengjun, HU Weitong, WANG Xu. Reversible Data Hiding in Fully Encrypted Images Based on Pixel Interval Partitioning and Prediction Recovery[J]. Computer Science, 2025, 52(6A): 240900030-8. - LIU Runjun, XIAO Fengjun, HU Weitong, WANG Xu
- Computer Science. 2025, 52 (6A): 240900030-8. doi:10.11896/jsjkx.240900030
-
Abstract
PDF(2885KB) ( 38 )
- References | Related Articles | Metrics
-
Reversible data hiding in encrypted images is a crucial cybersecurity technology for covert communication and privacy protection, and reversible data hiding schemes for fully encrypted images offer more reliable security. However, existing algorithms suffer from low embedding capacity and poor quality of the recovered images, making them unsuitable for complex cloud environments. To address these issues, this paper proposes a reversible data hiding scheme for fully encrypted images based on pixel interval partitioning and prediction recovery. The image owner fully encrypts the original image with an image encryption key. The data hider embeds additional information into the encrypted image through pixel interval partitioning using a data embedding key. The image receiver can losslessly extract the embedded data with the data embedding key and, with the help of the image encryption key and pixel prediction, achieve high-quality recovery of the image. Experimental results demonstrate that the embedding rate of the proposed algorithm is more than double that of the best existing algorithms, and the quality of the recovered images is significantly improved.
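The interval partitioning and prediction recovery steps are specific to the paper, but the separable three-party workflow they fit into (owner encrypts with an image key, hider embeds with a data key, receiver extracts losslessly with the data key) can be sketched as follows; the keystream cipher, LSB substitution, and all keys are illustrative assumptions.

```python
import numpy as np

prng = np.random.default_rng                 # keystream generators seeded by the two keys

def encrypt(img, enc_key):                   # image owner: full stream-cipher encryption
    ks = prng(enc_key).integers(0, 256, img.shape, dtype=np.uint8)
    return img ^ ks

def embed(enc_img, bits, data_key):          # data hider: LSB substitution at key-selected pixels
    out = enc_img.copy().ravel()
    pos = prng(data_key).permutation(out.size)[:len(bits)]
    out[pos] = (out[pos] & 0xFE) | bits
    return out.reshape(enc_img.shape)

def extract(marked, n_bits, data_key):       # receiver with the data key: lossless extraction
    pos = prng(data_key).permutation(marked.size)[:n_bits]
    return marked.ravel()[pos] & 1

img = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
secret = np.random.default_rng(9).integers(0, 2, 200).astype(np.uint8)

enc = encrypt(img, enc_key=1234)
marked = embed(enc, secret, data_key=5678)
assert np.array_equal(extract(marked, len(secret), data_key=5678), secret)
# decrypting `marked` with the image key recovers the image up to the embedded LSBs;
# schemes like the paper's additionally predict those pixels from their neighbours
# to restore them with high quality.
recovered = marked ^ prng(1234).integers(0, 256, img.shape, dtype=np.uint8)
```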
-
Study on System Security Testing Method Based on Digital Twin
李维峰, 谢江平. 基于数字孪生的系统安全测试方法研究[J]. 计算机科学, 2025, 52(6A): 240700068-7.
LI Weifeng, XIE Jiangping. Study on System Security Testing Method Based on Digital Twin[J]. Computer Science, 2025, 52(6A): 240700068-7. - LI Weifeng, XIE Jiangping
- Computer Science. 2025, 52 (6A): 240700068-7. doi:10.11896/jsjkx.240700068
-
Abstract
PDF(3053KB) ( 36 )
- References | Related Articles | Metrics
-
This paper explores a digital twin-based approach for system security testing, aiming to incorporate security design at the early stages of the system lifecycle through digital twins, thereby mitigating potential threats to industrial control systems (ICS). The methodology encompasses preliminary preparations, a four-phase penetration testing process, and report generation, ensuring that vulnerabilities are identified and validated prior to system construction. Leveraging digital twins to simulate system dynamics provides data fidelity for in-depth security analysis. The approach's effectiveness is validated through simulations of sensor and switch environments, where Modbus TCP/IP protocol vulnerabilities are identified and assessed, leading to recommended improvements. This study offers a novel perspective on ICS security testing, demonstrating the potential of digital twins in security design, and lays a foundation for future system security analysis and testing.
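One reason Modbus TCP/IP shows up in such assessments is that the protocol itself carries no authentication or encryption, which the raw request below makes visible; the host address is a placeholder and the frame layout follows the public Modbus specification rather than anything specific to the paper's test bench.

```python
import socket
import struct

# Minimal raw Modbus TCP "Read Holding Registers" (function 0x03) request.
# MBAP header: transaction id, protocol id (0), remaining length, unit id.
def read_holding_registers(host, unit_id=1, start_addr=0, quantity=2, port=502):
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(mbap + pdu)
        return s.recv(256)

# Any host that can reach the controller can issue this request unauthenticated,
# which is the class of weakness a digital-twin test bench can expose before deployment.
print(read_holding_registers("192.0.2.10"))   # placeholder address
```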
-
Network Attack Mitigation Framework Based on Normalized Processing and TrafficLLM
成凯, 汤卫东, 谈林涛, 陈佳, 李鑫. 基于归一化处理和TrafficLLM的网络攻击缓解框架[J]. 计算机科学, 2025, 52(6A): 250200080-9.
CHENG Kai, TANG Weidong, TAN Lintao, CHEN Jia, LI Xin. Network Attack Mitigation Framework Based on Normalized Processing and TrafficLLM[J]. Computer Science, 2025, 52(6A): 250200080-9. - CHENG Kai, TANG Weidong, TAN Lintao, CHEN Jia, LI Xin
- Computer Science. 2025, 52 (6A): 250200080-9. doi:10.11896/jsjkx.250200080
-
Abstract
PDF(3477KB) ( 45 )
- References | Related Articles | Metrics
-
With the continuous expansion of power distribution and transformation network infrastructure, the information and communication traffic data generated by various types of security secondary equipment, edge terminal nodes, and business systems show significant differences in format, protocol, and semantic characteristics. The main issues are that existing mitigation frameworks lack a data normalization algorithm for multi-source heterogeneous network anomaly traffic detection, that the analysis of network attack behaviors relies on rule engines based on manually extracted features, and that effective network attack mitigation measures are difficult to determine. To address these pain points, a network attack mitigation framework based on normalized processing and TrafficLLM (NAMF-NPTLLM) is proposed. The framework comprises four stages: feature selection, normalization processing, model fine-tuning, and generation of attack mitigation plans. Firstly, in the feature selection stage, an integrated voting mechanism combines the results of several feature selection methods to accurately extract the key features that have a significant impact on classification results. Secondly, the selected key features are normalized into a unified natural-language token sequence, providing standardized input for the TrafficLLM model used for traffic anomaly analysis within the framework. Then, the TrafficLLM model is fine-tuned so that it can understand prompt template instructions and learn the traffic patterns of attack behaviors. Finally, the fine-tuned large model performs inference to generate attack mitigation instructions, allowing the framework to dynamically adjust network attack mitigation strategies according to the characteristics of the attack behaviors.
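An integrated voting mechanism over several feature selectors can be sketched with scikit-learn as follows; the synthetic data, the three selectors, the top-k size, and the majority threshold are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)          # chi2 requires non-negative features
k = 10                                           # number of features each selector votes for

scores = {
    "mutual_info": mutual_info_classif(X, y, random_state=0),
    "chi2": chi2(X_pos, y)[0],
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0)
                     .fit(X, y).feature_importances_,
}
votes = np.zeros(X.shape[1], dtype=int)
for s in scores.values():
    votes[np.argsort(s)[-k:]] += 1               # each selector votes for its top-k features

selected = np.where(votes >= 2)[0]               # keep the features chosen by a majority
print(selected)                                   # indices passed on to normalization
```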