Computer Science, 2025, Vol. 52, Issue (8): 29-44. doi: 10.11896/jsjkx.250100062

• Software Engineering •

Review of Research on Deep Learning Compiler

LIU Zhengyu1,2, ZHANG Fan1, QI Xiaofeng1, GAO Yanzhao1, SONG Yijing3, FAN Wang3   

  1 University of Information Engineering,Zhengzhou 450000,China
    2 Key Laboratory of Cyberspace Security,Ministry of Education of China,Zhengzhou 450000,China
    3 Institute of Big Data,Fudan University,Shanghai 200000,China
  • Received:2025-01-09 Revised:2025-04-29 Online:2025-08-15 Published:2025-08-08
• About author:LIU Zhengyu,born in 1997,doctoral student,is a member of CCF(No.H4565G).His main research interests include distributed parallel computing and compilers.
    ZHANG Fan,born in 1981,professor,Ph.D supervisor.His main research interests include computer architecture and network security.
  • Supported by:
    National Key R & D Program of China (2022YFB4500900).

Abstract: With the rapid development of artificial intelligence, an increasing number of neural network models and algorithms have been proposed. Meanwhile, as Moore's Law gradually loses its effectiveness, a variety of new accelerators and computer architectures have emerged, creating an urgent demand for the efficient deployment of neural network models on these novel hardware platforms. Against this backdrop, deep learning compilers have emerged. Unlike traditional compilers, deep learning compilers take various network models as input, adopt a multi-level intermediate representation design to optimize models layer by layer, and perform hardware-specific optimizations in the backend for different hardware architectures, ultimately generating optimized executable programs. This paper first introduces the general framework of deep learning compilers, including their core components and overall workflow. It then categorizes and discusses the optimization techniques used at each level, summarizing recent research progress and highlighting the key research trends in the field. Finally, it surveys the current state of deep learning compiler research and discusses promising directions for future work.
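
As a concrete illustration of the workflow the abstract describes, the sketch below compiles a trained model with TVM [80], one of the deep learning compilers covered in this survey, using its Python Relay API. It is a minimal example under stated assumptions: the ONNX file name, input tensor name, and input shape are illustrative placeholders, not details taken from the paper.

```python
# Minimal sketch of the frontend -> multi-level IR -> backend pipeline
# described in the abstract, using TVM and its Relay graph-level IR.
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Frontend: import a trained network into the compiler's graph-level IR.
onnx_model = onnx.load("model.onnx")                    # hypothetical model file
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)})      # assumed input name/shape

# Middle end: hardware-independent graph optimizations (operator fusion,
# constant folding, layout transformation, ...) applied pass by pass.
with tvm.transform.PassContext(opt_level=3):
    # Backend: lower to the chosen target and emit optimized code;
    # "llvm" targets a CPU, while "cuda" or a vendor target could be used instead.
    lib = relay.build(mod, target="llvm", params=params)

# Runtime: load the generated module and execute it on the target device.
dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
```

Other compilers surveyed here, such as Glow [79] or MLIR-based toolchains [82], follow the same overall structure and differ mainly in their intermediate representations and backend targets.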

Key words: Deep learning, Compiler, Compiler optimization

CLC Number: TP314

References:
[1]ZHANG X,SUN N,FANG C,et al.Predoo:precision testing of deep learning operators[C]//Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis.2021:400-412.
[2]SUN R Y.Optimization for Deep Learning:An Overview[J].Journal of the Operations Research Society of China,2020(3):1-46.
[3]LI M,LIU Y,LIU X,et al.The Deep Learning Compiler:A Comprehensive Survey[J].IEEE Transactions on Parallel and Distributed Systems,2021,32(3):708-727.
[4]WANG Y,XIE F.Extending Tensor Virtual Machine to Support Deep-Learning Accelerators with Convolution Cores[C]//2022 26th International Conference on Engineering of Complex Computer Systems(ICECCS).2022:189-194.
[5]LÜCKE M,STEUWER M,SMITH A.Integrating a functional pattern-based IR into MLIR[C]//Proceedings of the 30th ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2021:12-22.
[6]CHELINI L,DREBES A,ZINENKO O,et al.Progressive raising in multi-level IR[C]//Proceedings of the 2021 IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2021:15-26.
[7]PIZZUTI F,STEUWER M,DUBACH C.Generating fast sparse matrix vector multiplication from a high level generic functional IR[C]//Proceedings of the 29th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2020:85-95.
[8]KLOPP D,ERDWEG S,PACAK A.A Typed Multi-level Datalog IR and Its Compiler Framework[C]//Proceedings of the ACM on Programming Languages.2024.
[9]GROSSMAN A,PAEHLER L,PARASYRIS K,et al.ComPile:A large IR dataset from production sources[J].arXiv:2309.15432,2023.
[10]FEHR M,NIU J,RIDDLE R,et al.IRDL:an IR definition language for SSA compilers[C]//Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2022:199-212.
[11]BHAT S,GROSSER T.Lambda the ultimate SSA:optimizing functional programs in SSA[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:168-178.
[12]LI M,LIU Y,CHEN B,et al.Building a domain-specific compiler for emerging processors with a reusable approach[J].SCIENCE CHINA Information Sciences,2023,67(1):112101.1-112101.19.
[13]ZENG J,KOU M,YAO H.KunlunTVM:A Compilation Framework for Kunlun Chip Supporting Both Training and Inference[C]//Proceedings of the Great Lakes Symposium on VLSI 2022.New York,NY,USA:Association for Computing Machinery,2022:299-304.
[14]LONG G,YANG J,LIN W.FusionStitching:Boosting Execution Efficiency of Memory Intensive Computations for DL Workloads[J].arXiv:1911.11576,2019.
[15]LONG G,YANG J,ZHU K,et al.FusionStitching:Deep Fusion and Code Generation for Tensorflow Computations on GPUs[J].arXiv:1811.05213,2018.
[16]SHEN L,ZHOU W H,WANG F,et al.swLLVM:An Optimizing Compiler for the New Generation of Sunway Supercomputers[J].Journal of Software,2024,35(5):2359-2378.
[17]MAVROGEORGIS N,VASILADIOTIS C,MU P,et al.UNIFICO:Thread Migration in Heterogeneous-ISA CPUs without State Transformation[C]//Proceedings of the 33rd ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2024:86-99.
[18]CHATARASI P,NEUENDORFFER S,BAYLISS S,et al.Vyasa:A High-Performance Vectorizing Compiler for Tensor Convolutions on the Xilinx AI Engine[J].arXiv:2006.01331,2020.
[19]LAVAEE R,CRISWELL J,DING C.Codestitcher:inter-procedural basic block layout optimization[C]//Proceedings of the 28th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2019:65-75.
[20]LIU Y,WANG Y,YU R,et al.Optimizing CNN Model Inference on CPUs[C]//2019 USENIX Annual Technical Conference(USENIX ATC 19).2019:1025-1040.
[21]ANDERSON A,GREGG D.Optimal DNN primitive selection with partitioned Boolean quadratic programming[C]//Proceedings of the 2018 International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2018:340-351.
[22]AHN B H,LEE J,LIN J M,et al.Ordering Chaos:Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices[C]//Proceedings of Machine Learning and Systems.2020:44-57.
[23]SHI Y,YANG Z,XUE J,et al.WELDER:Scheduling Deep Learning Memory Access via Tile-graph[C]//17th USENIX Symposium on Operating Systems Design and Implementation(OSDI'23).2023.
[24]LIAO H H,LEE C L,LEE J K,et al.Support Convolution of CNN with Compression Sparse Matrix Multiplication Flow in TVM[C]//50th International Conference on Parallel Processing Workshop.New York,NY,USA:Association for Computing Machinery,2021:1-7.
[25]JAIN A,BHATTACHARYA S,MASUDA M,et al.Efficient Execution of Quantized Deep Learning Models:A Compiler Approach[J].arXiv:2006.10226,2020.
[26]GUAN H,SHEN X,LIM S H.Wootz:a compiler-based framework for fast CNN pruning via composability[C]//Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2019:717-730.
[27]KJOLSTAD F,AHRENS W,KAMIL S,et al.Tensor Algebra Compilation with Workspaces[C]//2019 IEEE/ACM International Symposium on Code Generation and Optimization(CGO).2019:180-192.
[28]MA Z,WANG H,XING J,et al.PowerFusion:A Tensor Compiler with Explicit Data Movement Description and Instruction-level Graph IR[J].arXiv:2307.04995,2023.
[29]ZENG J,KOU M Y,ZHENG X Y,et al.TVM_T:A High-Performance Neural Network Training Compiler Based on TVM[J].Science China Information Sciences,2023,53(12):2458-2471.
[30]RIVERA J,FRANCHETTI F,PÜSCHEL M.A compiler for sound floating-point computations using affine arithmetic[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:66-78.
[31]SOMMER L,AXENIE C,KOCH A.SPNC:an open-source MLIR-based compiler for fast sum-product network inference on CPUs and GPUs[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:290-300.
[32]THAKUR M,NANDIVADA V K.Compare less,defer more:scaling value-contexts based whole-program heap analyses[C]//Proceedings of the 28th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2019:135-146.
[33]CHEN D,LIU F,DING C,et al.Locality analysis through static parallel sampling[C]//Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2018:557-570.
[34]PENG C,LIU Q Z,CHEN C B.Loop Permutation and Auto-Tuning under the Polyhedral Model[J].Computer Engineering and Science,2023,45(12):2121-2134.
[35]ROCHA R C O,PETOUMENOS P,FRANKE B,et al.Loop rolling for code size reduction[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:217-229.
[36]BEHROOZI A,PARK S,MAHLKE S.Loner:utilizing the CPU vector datapath to process scalar integer data[C]//Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2022:205-217.
[37]ROCHA R C O,PORPODAS V,PETOUMENOS P,et al.Vectorization-aware loop unrolling with seed forwarding[C]//Proceedings of the 29th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2020:1-13.
[38]HAJ-ALI A,AHMED N K,WILLKE T,et al.NeuroVectori-zer:end-to-end vectorization with deep reinforcement learning[C]//Proceedings of the 18th ACM/IEEE International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2020:242-255.
[39]ZHAO J,KRUSE M,COHEN A.A polyhedral compilation framework for loops with dynamic data-dependent bounds[C]//Proceedings of the 27th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2018:14-24.
[40]SIOUTAS S,STUIJK S,CORPORAAL H,et al.Loop transformations leveraging hardware prefetching[C]//Proceedings of the 2018 International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2018:254-264.
[41]CHANDRASEKHAR A,CHEN G,CHEN P Y,et al.IGC:the open source Intel graphics compiler[C]//Proceedings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization.Washington,DC,USA:IEEE Press,2019:254-265.
[42]FANG Y F,LI Y B,DONG E M,et al.Memory Access and Communication Fusion Compilation Optimization for the Sunway Many-Core Processor[J].Journal of Software,2024,35(6):1-20.
[43]DAMANI S,BARUA P,SARKAR V.Memory access scheduling to reduce thread migrations[C]//Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2022:144-155.
[44]KOEPLINGER D,FELDMAN M,PRABHAKAR R,et al.Spatial:a language and compiler for application accelerators[C]//Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2018:296-311.
[45]SILVA A F D,DE LIMA B N B,PEREIRA F M Q.Exploring the space of optimization sequences for code-size reduction:insights and tools[C]//Proceedings of the 30th ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2021:47-58.
[46]MA L,XIE Z,YANG Z,et al.RAMMER:Enabling Holistic Deep Learning Compiler Optimizations with rTasks[C]//Operating Systems Design and Implementation.2020.
[47]YANG H,LIU Q R,FAN W,et al.Research on Automatic Scheduling Optimization of Deep Learning Based on Feature Importance[J].Computer Science,2024,51(7):22-28.
[48]RYU J,PARK E,SUNG H.One-shot tuner for deep learning compilers[C]//Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2022:89-103.
[49]HAN R,KIM H.Exponentially Expanding the Phase-Ordering Search Space via Dormant Information[C]//Proceedings of the 33rd ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2024:250-261.
[50]WANG H,TANG Z,ZHANG C,et al.Automating reinforcement learning architecture design for code optimization[C]//Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2022:129-143.
[51]ZHANG H B,ZHOU X L,XING M J,et al.AutoConfig:An Automatic Configuration Mechanism for Deep Learning Compiler Optimization[J].Journal of Software,2024,35(6):2668-2686.
[52]PARK S,LATIFI S,PARK Y,et al.SRTuner:effective compiler optimization customization by exposing synergistic relations[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:118-130.
[53]COWAN M,MOREAU T,CHEN T,et al.Automatic generation of high-performance quantized machine learning kernels[C]//Proceedings of the 18th ACM/IEEE International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2020:305-316.
[54]BASTOUL C,ZHANG Z,RAZANAJATO H,et al.Optimizing GPU deep learning operators with polyhedral scheduling constraint injection[C]//Proceedings of the 20th IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2022:313-324.
[55]RIVERA J,FRANCHETTI F,PÜSCHEL M.An interval compiler for sound floating-point computations[C]//Proceedings of the 2021 IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2021:52-64.
[56]ALIAS C,PLESCO A.Data-aware process networks[C]//Proceedings of the 30th ACM SIGPLAN International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2021:1-11.
[57]ZHAO J,LI B,NIE W,et al.AKG:automatic kernel generation for neural processing units using polyhedral transformations[C]//Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2021:1233-1248.
[58]BAGHDADI R,RAY J,ROMDHANE M B,et al.Tiramisu:a polyhedral compiler for expressing fast and portable code[C]//Proceedings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization.Washington,DC,USA:IEEE Press,2019:193-205.
[59]DOERFERT J,SHARMA S,HACK S.Polyhedral expression propagation[C]//Proceedings of the 27th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2018:25-36.
[60]ACHARYA A,BONDHUGULA U,COHEN A.Polyhedral auto-transformation with no integer linear programming[C]//Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2018:529-542.
[61]KRUSE M,GROSSER T.DeLICM:scalar dependence removal at zero memory cost[C]//Proceedings of the 2018 International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2018:241-253.
[62]KROLIK A,VERBRUGGE C,HENDREN L.r3d3:optimized query compilation on GPUs[C]//Proceedings of the 2021 IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2021:277-288.
[63]PROKOPEC A,DUBOSCQ G,LEOPOLDSEDER D,et al.An optimization-driven incremental inline substitution algorithm for just-in-time compilers[C]//Proceedings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization.Washington,DC,USA:IEEE Press,2019:164-179.
[64]OTTONI G.HHVM JIT:a profile-guided,region-based compiler for PHP and Hack[C]//Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.New York,NY,USA:Association for Computing Machinery,2018:151-165.
[65]BROCK J,DING C,XU X,et al.PAYJIT:space-optimal JIT compilation and its practical implementation[C]//Proceedings of the 27th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2018:71-81.
[66]LEOPOLDSEDER D,STADLER L,WÜRTHINGER T,et al.Dominance-based duplication simulation(DBDS):code duplication to enable compiler optimizations[C]//Proceedings of the 2018 International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2018:126-137.
[67]ZHU H,WU R,DIAO Y,et al.Roller:Fast and Efficient Tensor Compilation for Deep Learning[C]//The 16th USENIX Symposium on Operating Systems Design and Implementation(OSDI'22).2022.
[68]QIAO B,REICHE O,HANNIG F,et al.From loop fusion to kernel fusion:a domain-specific approach to locality optimization[C]//Proceedings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization.Washington,DC,USA:IEEE Press,2019:242-253.
[69]KURTH A,WOLTERS K,FORSBERG B,et al.Mixed-data-model heterogeneous compilation and OpenMP offloading[C]//Proceedings of the 29th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2020:119-131.
[70]LIU Y,HUANG L,WU M,et al.PPOpenCL:a performance-portable OpenCL compiler with host and kernel thread code fusion[C]//Proceedings of the 28th International Conference on Compiler Construction.New York,NY,USA:Association for Computing Machinery,2019:2-16.
[71]LI Y B,ZHAO R C,HAN L,et al.A Parallel Compilation Framework for Heterogeneous Many-Core Processors[J].Journal of Software,2019,30(4):981-1001.
[72]KIM C,JEONG S,CHO S,et al.Thread-aware area-efficient high-level synthesis compiler for embedded devices[C]//Proceedings of the 2021 IEEE/ACM International Symposium on Code Generation and Optimization.Virtual Event,Republic of Korea:IEEE Press,2021:327-339.
[73]BAGHSORKHI S S,MARGIOLAS C.Automating efficient variable-grained resiliency for low-power IoT systems[C]//Proceedings of the 2018 International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2018:38-49.
[74]HU P,LU M,WANG L,et al.TPU-MLIR:A Compiler For TPU Using MLIR[J].arXiv:2210.15016,2022.
[75]ESSADKI M,MICHEL B,MAUGARS B,et al.Code Generation for In-Place Stencils[C]//Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2023:2-13.
[76]ZHENG L,LI Z,ZHANG H,et al.Alpa:Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning[C]//16th USENIX Symposium on Operating Systems Design and Implementation(OSDI 22).2022:559-578.
[77]ZHANG C,MA L,XUE J,et al.COCKTAILER:Analyzing and Optimizing Dynamic Control Flow in Deep Learning[C]//17th USENIX Symposium on Operating Systems Design and Implementation(OSDI'23).2023.
[78]NARDI L,KOEPLINGER D,OLUKOTUN K.Practical Design Space Exploration[C]//2019 IEEE 27th International Symposium on Modeling,Analysis,and Simulation of Computer and Telecommunication Systems(MASCOTS).2019:347-358.
[79]ROTEM N,FIX J,ABDULRASOOL S,et al.Glow:Graph Lowering Compiler Techniques for Neural Networks[J].arXiv:1805.00907,2018.
[80]CHEN T,MOREAU T,JIANG Z,et al.TVM:An automated end-to-end optimizing compiler for deep learning[C]//13th USENIX Symposium on Operating Systems Design and Implementation(OSDI'18).2018:578-594.
[81]RAGAN-KELLEY J,BARNES C,ADAMS A,et al.Halide:a language and compiler for optimizing parallelism,locality,and recomputation in image processing pipelines[J].ACM SIGPLAN Notices,2013,48(6):519-530.
[82]LATTNER C,AMINI M,BONDHUGULA U,et al.MLIR:A Compiler Infrastructure for the End of Moore's Law[J].arXiv:2002.11054,2020.
[83]VASILACHE N,ZINENKO O,THEODORIDIS T,et al.Tensor Comprehensions:Framework-Agnostic High-Performance Machine Learning Abstractions[J].arXiv:1802.04730,2018.
[84]ROESCH J,LYUBOMIRSKY S,KIRISAME M,et al.Relay:A High-Level Compiler for Deep Learning[J].arXiv:1904.08368,2019.
[85]ZHENG L,JIA C,SUN M,et al.Ansor:Generating High-Performance Tensor Programs for Deep Learning[C]//14th USENIX Symposium on Operating Systems Design and Implementation(OSDI 20).2020:863-879.
[86]FEAUTRIER P.Some efficient solutions to the affine scheduling problem.Part II.Multidimensional time[J].International Journal of Parallel Programming,1992,21(6):389-420.
[87]FEAUTRIER P.Some efficient solutions to the affine scheduling problem.I.One-dimensional time[J].International Journal of Parallel Programming,1992,21(5):313-347.
[88]LIM A W,LAM M S.Maximizing parallelism and minimizing synchronization with affine transforms[C]//Proceedings of the 24th ACM SIGPLAN-SIGACT symposium on Principles of programming languages.New York,NY,USA:Association for Computing Machinery,1997:201-214.
[89]LIM A W,CHEONG G I,LAM M S.An affine partitioning algorithm to maximize parallelism and minimize communication[C]//Proceedings of the 13th International Conference on Supercomputing.Rhodes Greece:ACM,1999:228-237.
[90]BONDHUGULA U,BASKARAN M,KRISHNAMOORTHY S,et al.Automatic Transformations for Communication-Minimized Parallelization and Locality Optimization in the Polyhedral Model[C]//Compiler Construction.Berlin:Springer,2008:132-146.
[91]ZHANG C,DONG R,WANG H,et al.MAGPY:compiling eager mode DNN programs by monitoring execution states[C]//Proceedings of the 2024 USENIX Conference on Usenix Annual Technical Conference.2024:683-698.
[92]SHIN Y,PARK J,CHO S,et al.PIMFlow:Compiler and Runtime Support for CNN Models on Processing-in-Memory DRAM[C]//Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization.New York,NY,USA:Association for Computing Machinery,2023:249-262.
[93]CASTRO-LOPEZ O,VEGA-LOPEZ I F.Multi-target compiler for the deployment of machine learning models[C]//Procee-dings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization.Washington,DC,USA:IEEE Press,2019:280-281.
[94]GUO D,YANG D,ZHANG H,et al.DeepSeek-R1:Incentivizing reasoning capability in LLMs via reinforcement learning[J].arXiv:2501.12948,2025.