Computer Science ›› 2025, Vol. 52 ›› Issue (3): 17-32. doi: 10.11896/jsjkx.241000043

• 3D Vision and Metaverse •

Survey on 3D Scene Reconstruction Techniques in Metaverse

SONG Xingnuo¹, WANG Congyan², CHEN Mingkai²

  1. Portland College, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
  2. School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
  • Received: 2024-10-10  Revised: 2024-12-05  Online: 2025-03-15  Published: 2025-03-07
  • About author: SONG Xingnuo, born in 2004, undergraduate. Her main research interests include computer vision and electronic science and technology.
    CHEN Mingkai, born in 1989, associate professor. His main research interests include wireless communication and signal processing.
  • Supported by:
    National Natural Science Foundation of China (62001246), Key R&D Program of Jiangsu Province, China (BE2023035), and Open Research Fund of Jiangsu Engineering Research Center of Communication and Network Technology, NJUPT (JERCCN202301).

Abstract: With the development of technologies such as virtual reality (VR), augmented reality (AR), blockchain, and artificial intelligence (AI), the metaverse is gradually being applied in many fields, including gaming, education, healthcare, and business. As a core technology of the metaverse, 3D reconstruction has attracted attention for its high research value and broad application prospects. Traditional 3D reconstruction techniques perform poorly on metaverse tasks characterized by real-time interactivity, leaving significant room for improvement in computational efficiency and reconstruction accuracy. How to optimize 3D reconstruction technology, improve its accuracy and robustness, and provide users with a more realistic and real-time interactive experience has therefore become a current research hotspot. This paper tracks and summarizes recent 3D reconstruction techniques for scene generation in the metaverse. First, we review the development history of the metaverse, point out the challenges faced by 3D reconstruction technology, and outline solutions based on two different 3D representations. Then, 3D reconstruction techniques based on 3D Gaussian and Neural Radiance Field (NeRF) representations are surveyed separately. Next, innovative methods that fuse 3D reconstruction with tactile signals and large language models are analyzed. Finally, the challenges faced by scene-based 3D reconstruction in the metaverse are discussed in detail, and corresponding future research directions are proposed.
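For context, the two 3D representations the survey builds on admit compact standard formulations; the following is a brief sketch using the notation of the original NeRF and 3D Gaussian splatting papers, not a formulation specific to this survey. NeRF renders a pixel by volume rendering along a camera ray r(t) = o + td, where σ is the learned density field, c the view-dependent color, and T the accumulated transmittance:

    % NeRF volume rendering: expected color along a ray
    C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
    \qquad
    T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)

3D Gaussian splatting instead rasterizes a set of anisotropic 3D Gaussians: each Gaussian is projected to a 2D splat with opacity α_i, and the pixel color is obtained by front-to-back alpha blending over the N depth-sorted splats covering the pixel:

    % 3D Gaussian splatting: alpha blending of depth-ordered projected splats
    C = \sum_{i=1}^{N} \mathbf{c}_i\,\alpha_i \prod_{j=1}^{i-1} \big(1 - \alpha_j\big)

The contrast between these two equations underlies the efficiency discussion in the survey: NeRF must query a neural network many times per ray, whereas Gaussian splatting replaces ray integration with an explicit, rasterizer-friendly sum.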

Key words: Metaverse, 3D reconstruction, 3D Gaussians, Neural radiance field, Visual and tactile sensors, Large language model

CLC Number: TP391.41