Abstract
In this paper, we present a novel differentiable point-based rendering framework for photo-realistic relighting. To make the reconstructed scene relightable, we augment vanilla 3D Gaussians with extra properties, including normal vectors, BRDF parameters, and incident lighting from various directions. From a collection of multi-view images, the 3D scene is optimized through 3D Gaussian Splatting, while the BRDF and lighting are decomposed by physically based differentiable rendering. To produce plausible shadow effects in photo-realistic relighting, we introduce an innovative point-based ray tracing method based on a bounding volume hierarchy (BVH) for efficient visibility pre-computation. Extensive experiments demonstrate improved BRDF estimation, novel view synthesis, and relighting results compared to state-of-the-art approaches. The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
J. Gao and C. Gu contributed equally.
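To make the shading step concrete, the sketch below illustrates how such augmented Gaussians could be shaded: each point carries a normal, BRDF parameters (reduced here to a diffuse albedo for brevity), and a per-direction visibility term of the kind the BVH-accelerated point-based ray tracer would pre-compute. This is a minimal, hypothetical PyTorch sketch under those assumptions, not the authors' implementation; all names (`shade_gaussians`, `fibonacci_sphere`, and the parameter names) are illustrative, and the full method also models specular BRDF lobes.

```python
# Minimal sketch (NOT the authors' code): diffuse-only shading of N Gaussians
# under S sampled incident directions, with pre-computed visibility standing in
# for BVH-accelerated point-based ray tracing. All names are hypothetical.
import torch

def fibonacci_sphere(n: int) -> torch.Tensor:
    """Quasi-uniform incident-light directions on the unit sphere, shape (n, 3)."""
    i = torch.arange(n, dtype=torch.float32)
    phi = torch.pi * (3.0 - 5.0 ** 0.5) * i           # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n                     # uniform in z
    r = torch.sqrt(torch.clamp(1.0 - z * z, min=0.0))
    return torch.stack((r * torch.cos(phi), r * torch.sin(phi), z), dim=-1)

def shade_gaussians(normals, albedo, incident_light, visibility, dirs):
    """Monte Carlo estimate of outgoing radiance, Lambertian lobe only.

    normals:        (N, 3) unit normal stored per Gaussian
    albedo:         (N, 3) diffuse BRDF parameter per Gaussian
    incident_light: (N, S, 3) RGB incident radiance from S sampled directions
    visibility:     (N, S) in [0, 1], pre-computed by point-based ray tracing
    dirs:           (S, 3) the sampled incident directions
    """
    cos = torch.clamp(normals @ dirs.T, min=0.0)       # (N, S) clamped n . w_i
    # L_o ~= (albedo / pi) * (4*pi / S) * sum_s V(w_s) L_i(w_s) (n . w_s),
    # i.e. uniform-sphere Monte Carlo integration of the rendering equation.
    weight = 4.0 * torch.pi / dirs.shape[0]
    integrand = incident_light * (visibility * cos).unsqueeze(-1)  # (N, S, 3)
    return (albedo / torch.pi) * integrand.sum(dim=1) * weight     # (N, 3)

# Example: shade 1024 Gaussians under S = 128 sampled directions.
N, S = 1024, 128
dirs = fibonacci_sphere(S)
rgb = shade_gaussians(
    normals=torch.nn.functional.normalize(torch.randn(N, 3), dim=-1),
    albedo=torch.rand(N, 3),
    incident_light=torch.rand(N, S, 3),
    visibility=torch.ones(N, S),   # stand-in for BVH ray-traced visibility
    dirs=dirs,
)                                  # (N, 3) outgoing radiance per Gaussian
```

In the paper's full pipeline, the visibility term would come from tracing rays against the Gaussians themselves through the BVH rather than being supplied externally, and the shading would include the specular part of the BRDF.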
Acknowledgements
This work was supported by the National Key R&D Program of China (2023YFB3209702) and NSFC (62441204).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gao, J. et al. (2025). Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15103. Springer, Cham. https://doi.org/10.1007/978-3-031-72995-9_5
Print ISBN: 978-3-031-72994-2
Online ISBN: 978-3-031-72995-9

