Computer Science ›› 2025, Vol. 52 ›› Issue (12): 141-149. doi: 10.11896/jsjkx.250400075

• Computer Graphics & Multimedia •

Appearance Enhancement and Semantic Segmentation-based Neural Radiance Fields

CAO Mingwei, HUANG Baolong, ZHAO Haifeng   

  1. School of Computer Science and Technology, Anhui University, Hefei 230601, China
  • Received: 2025-04-15 Revised: 2025-08-27 Published: 2025-12-15 Online: 2025-12-09
  • Corresponding author: CAO Mingwei (caomw@hfut.edu.cn)
  • About author: CAO Mingwei, born in 1986, Ph.D., associate professor, master supervisor, is a member of CCF (No.49221M). His main research interests include 3D reconstruction and computer vision.
  • Supported by:
    This work was supported by the Anhui Province University Research Project (2024AH050045) and the National Natural Science Foundation of China (62372153).

Abstract: The rapid advancement of deep learning has notably propelled 3D reconstruction techniques within computer vision. NeRFs have become an essential methodology owing to their strength in scene modeling and superior view synthesis. However, challenges persist in dynamic environments, particularly in managing intricate lighting variations and transient object interference. Changes in imaging conditions lead to inconsistent scene appearances, degrading the quality of view synthesis, while dynamic elements undermine the photorealism of the reconstructed scenes. To mitigate these issues, this paper introduces AS-NeRF, an Appearance Enhancement and Semantic Segmentation-based Neural Radiance Field. By combining cone-based sampling with an integrated positional encoding mechanism, AS-NeRF improves the efficiency of appearance feature fusion and strengthens the model's adaptability to variations in lighting and camera parameters, thereby improving color consistency and overall rendering realism. Additionally, a lightweight segmentation network predicts transient visibility masks in an end-to-end manner, effectively isolating dynamic objects and reducing their impact on view synthesis quality. The efficacy of AS-NeRF is verified through experiments on the Photo Tourism dataset, with qualitative and quantitative comparisons against several existing methods. The results demonstrate that AS-NeRF surpasses existing classical approaches in synthesis accuracy and further confirm the accuracy of the predicted segmentation masks in separating transient objects.
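
The abstract's appearance-enhancement component rests on cone-based sampling combined with integrated positional encoding. Since this page does not give the formulation, the following is a minimal NumPy sketch of mip-NeRF-style integrated positional encoding, in which each ray segment is approximated by a Gaussian (mean and diagonal variance) and high-frequency channels are smoothly attenuated; the function name, argument shapes, and frequency count are illustrative assumptions, not the authors' implementation.

import numpy as np

def integrated_pos_enc(mean, var, num_freqs=16):
    """Mip-NeRF-style integrated positional encoding (sketch).

    mean, var: (..., 3) Gaussian approximation of a conical frustum
    along a ray. Returns (..., 2 * num_freqs * 3) features in which
    frequencies that are fine relative to the frustum are damped.
    """
    scales = 2.0 ** np.arange(num_freqs)                 # (L,)
    scaled_mean = mean[..., None, :] * scales[:, None]   # (..., L, 3)
    scaled_var = var[..., None, :] * scales[:, None] ** 2
    # E[sin x] and E[cos x] for x ~ N(m, s^2) damp each band by exp(-s^2/2).
    damping = np.exp(-0.5 * scaled_var)
    enc = np.concatenate([np.sin(scaled_mean) * damping,
                          np.cos(scaled_mean) * damping], axis=-1)
    return enc.reshape(*mean.shape[:-1], -1)

As the variance grows along the ray, the damping term drives the high-frequency channels toward zero, which is what gives this family of encodings its anti-aliasing behavior and its robustness to sampling footprint.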

Key words: Neural radiance fields, View synthesis, Neural rendering, 3D reconstruction
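
For the transient-handling component, the abstract only states that a lightweight segmentation network predicts transient visibility masks end-to-end. A common way such a mask enters training in NeRF-in-the-wild-style pipelines is to gate the photometric loss per ray; the PyTorch sketch below illustrates that pattern, with the function name, the mask convention (1 for static content, 0 for transients), and the regularizer weight beta all being assumptions rather than the paper's specification.

import torch

def masked_photometric_loss(pred_rgb, gt_rgb, transient_mask, beta=0.01):
    """Photometric loss gated by a predicted transient-visibility mask (sketch).

    pred_rgb, gt_rgb: (N, 3) rendered and ground-truth colors per ray.
    transient_mask:   (N,) segmentation output, ~1 on static scene
                      content and ~0 on transient objects.
    """
    per_ray = ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)    # (N,) squared error
    data_term = (transient_mask * per_ray).mean()        # ignore masked rays
    # Penalize over-masking so the network cannot trivially mask everything.
    mask_reg = beta * (1.0 - transient_mask).mean()
    return data_term + mask_reg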

CLC Number: TP391.41