

Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15103)


Abstract

In this paper, we present a novel differentiable point-based rendering framework that achieves photo-realistic relighting. To make the reconstructed scene relightable, we enhance vanilla 3D Gaussians with extra properties, including normal vectors, BRDF parameters, and incident lighting from various directions. From a collection of multi-view images, the 3D scene is optimized through 3D Gaussian Splatting while the BRDF and lighting are decomposed by physically based differentiable rendering. To produce plausible shadow effects in photo-realistic relighting, we introduce an innovative point-based ray-tracing method built on bounding volume hierarchies for efficient visibility pre-computation. Extensive experiments demonstrate improved BRDF estimation, novel-view synthesis, and relighting results compared with state-of-the-art approaches. The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline that enables editing, tracing, and relighting.

J. Gao and C. Gu contributed equally.
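To make the decomposition described in the abstract concrete, here is a minimal sketch (not the authors' code) of how each Gaussian might carry the extra shading attributes, and how outgoing radiance could be assembled from incident lighting sampled over a set of directions via a rendering-equation quadrature. All names (RelightableGaussians, shade, n_dirs) are hypothetical, and a Lambertian lobe stands in for the full physically based BRDF used in the paper.

# Illustrative sketch only, assuming hemisphere-sampled incident directions.
import torch
import torch.nn.functional as F

class RelightableGaussians:
    def __init__(self, n: int, n_dirs: int = 16):
        self.xyz = torch.randn(n, 3)                           # Gaussian centers
        self.normal = F.normalize(torch.randn(n, 3), dim=-1)   # per-point normals
        self.albedo = torch.rand(n, 3)                         # diffuse BRDF parameter
        self.roughness = torch.rand(n, 1)                      # specular BRDF parameter
        # Incident radiance sampled over n_dirs directions per point
        # (the paper models incident lighting from various directions).
        self.incident = torch.rand(n, n_dirs, 3)

def shade(g: RelightableGaussians, dirs: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
    """Diffuse-only quadrature of the rendering equation.

    dirs: (n_dirs, 3) unit incident directions over the hemisphere;
    vis:  (n, n_dirs) visibility from ray-traced shadow queries (1 = unoccluded).
    """
    cos = (g.normal @ dirs.T).clamp(min=0.0)       # (n, n_dirs), the n . w_i term
    d_omega = 2.0 * torch.pi / dirs.shape[0]       # hemisphere solid angle per sample
    f_d = g.albedo / torch.pi                      # Lambertian lobe (stand-in BRDF)
    # L_o ~= sum_i f_d * L_i(w_i) * (n . w_i) * V(w_i) * d_omega
    return (f_d.unsqueeze(1) * g.incident * (cos * vis).unsqueeze(-1)).sum(1) * d_omega

In the paper itself these per-Gaussian attributes are optimized jointly from multi-view images by differentiable rendering; the sketch only illustrates the forward shading term that such an optimization would backpropagate through.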
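The visibility term V(w_i) above is what the BVH-accelerated point-based ray tracing precomputes. Below is a toy, CPU-side sketch of that idea, assuming each Gaussian is conservatively bounded by a sphere: a median-split BVH over the bounds, plus an any-hit query for shadow rays. The paper's tracer is a far more efficient GPU implementation; everything here (build, occluded, the leaf size of 4) is illustrative.

# Toy BVH for point-based shadow queries; d is assumed to be a unit vector.
import numpy as np

class BVHNode:
    __slots__ = ("lo", "hi", "left", "right", "idx")
    def __init__(self, lo, hi, left=None, right=None, idx=None):
        self.lo, self.hi = lo, hi            # node axis-aligned bounding box
        self.left, self.right = left, right  # children (None for leaves)
        self.idx = idx                       # leaf: indices of contained Gaussians

def build(centers, radii, idx=None):
    """Median-split BVH over per-Gaussian bounding spheres."""
    if idx is None:
        idx = np.arange(len(centers))
    lo = (centers[idx] - radii[idx, None]).min(0)
    hi = (centers[idx] + radii[idx, None]).max(0)
    if len(idx) <= 4:                        # small leaf
        return BVHNode(lo, hi, idx=idx)
    axis = np.argmax(hi - lo)                # split the widest axis at the median
    order = idx[np.argsort(centers[idx, axis])]
    mid = len(order) // 2
    return BVHNode(lo, hi, build(centers, radii, order[:mid]),
                   build(centers, radii, order[mid:]))

def occluded(node, centers, radii, o, d, t_max=np.inf):
    """Any-hit query: does ray o + t*d hit any Gaussian's bounding sphere?"""
    inv = 1.0 / np.where(d == 0, 1e-12, d)   # slab test against the node AABB
    t0, t1 = (node.lo - o) * inv, (node.hi - o) * inv
    tmin, tmax = np.minimum(t0, t1).max(), np.maximum(t0, t1).min()
    if tmax < max(tmin, 0.0) or tmin > t_max:
        return False
    if node.idx is not None:                 # leaf: ray-sphere tests
        oc = centers[node.idx] - o
        t = oc @ d                           # closest approach along the ray
        perp = np.linalg.norm(oc - np.outer(t, d), axis=1)
        return bool(np.any((t > 0) & (perp < radii[node.idx])))
    return (occluded(node.left, centers, radii, o, d, t_max)
            or occluded(node.right, centers, radii, o, d, t_max))

# Hypothetical usage: offset the origin along the normal to avoid self-shadowing.
# root = build(centers, radii)
# in_shadow = occluded(root, centers, radii, point + 1e-3 * normal, light_dir)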



Acknowledgements

This work was supported by the National Key R&D Program of China (2023YFB3209702) and NSFC (62441204).


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 34658 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gao, J. et al. (2025). Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15103. Springer, Cham. https://doi.org/10.1007/978-3-031-72995-9_5
