GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction

ECCV 2024
Yuxuan Mu1, Xinxin Zuo2, Chuan Guo1, Yilin Wang1, Juwei Lu2, Xiaofeng Wu2, Songcen Xu2, Peng Dai2, Youliang Yan2, Li Cheng1
1University of Alberta  2Noah's Ark Lab, Huawei Canada

We present GSD, a diffusion model approach based on the Gaussian Splatting (GS) representation for 3D object reconstruction from a single view. Prior works suffer from inconsistent 3D geometry or mediocre rendering quality due to ill-suited representations. We take a step towards resolving these shortcomings by utilizing the recent state-of-the-art 3D explicit representation, Gaussian Splatting, together with an unconditional diffusion model. This model learns to generate 3D objects represented by sets of GS ellipsoids. With these strong generative 3D priors, the diffusion model, although trained unconditionally, is ready for view-guided reconstruction without further fine-tuning. This is achieved by propagating fine-grained 2D features through the efficient yet flexible splatting function and the guided denoising sampling process. In addition, a 2D diffusion model is employed to enhance rendering fidelity and to improve the reconstructed GS quality by polishing and re-using the rendered images. The final reconstructed objects come with explicit, high-quality 3D structure and texture, and can be efficiently rendered from arbitrary views. Experiments on the challenging real-world CO3D dataset demonstrate the superiority of our approach.
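The view-guided sampling described above can be sketched, at toy scale, as classifier-guidance-style denoising: at each reverse-diffusion step, the gradient of a rendering loss against the observed input view nudges the sample, so no conditional fine-tuning of the unconditional model is needed. Everything below is an illustrative assumption, not the paper's implementation: the "renderer" is a fixed linear map standing in for the differentiable splatting function, and the denoiser is a placeholder for the trained unconditional GS diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "3D object" is a flat vector of GS parameters; "rendering" a
# view is approximated by a fixed linear projection (illustrative assumption).
D, V = 16, 4                      # latent dim, rendered-view dim
render = rng.normal(size=(V, D))  # stand-in for the splatting function
target_view = rng.normal(size=V)  # observed input view (the guidance signal)

# Simple DDPM-style noise schedule (illustrative values).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def denoiser(x, t):
    """Placeholder for the unconditional diffusion model: predicts noise."""
    return x * np.sqrt(1.0 - abar[t])

x = rng.normal(size=D)            # start from pure noise
guidance_scale = 0.02
for t in reversed(range(T)):
    eps = denoiser(x, t)
    # Unconditional DDPM mean update.
    x = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(alphas[t])
    # View guidance: gradient of 0.5 * ||render(x) - target||^2 w.r.t. x,
    # propagating 2D observation error back into the 3D GS parameters.
    grad = render.T @ (render @ x - target_view)
    x = x - guidance_scale * grad
    if t > 0:  # re-inject noise on all but the final step
        x = x + np.sqrt(betas[t]) * rng.normal(size=D)

print(np.linalg.norm(render @ x - target_view))  # residual vs. input view
```

The key design point mirrored here is that guidance only requires the rendering function to be differentiable; the prior itself stays unconditional, so the same model can absorb additional observed views by summing their loss gradients.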

Approach Overview

View Consistency from 3D GS Modeling

3D Geometry from 3D GS Modeling

Real-World Single View Reconstruction on CO3D

Improvement from Additional Observations

Scaling-up Results on OmniObject3D


BibTeX

@article{mu2024gsd,
      title={GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction},
      author={Mu, Yuxuan and Zuo, Xinxin and Guo, Chuan and Wang, Yilin and Lu, Juwei and Wu, Xiaofeng and Xu, Songcen and Dai, Peng and Yan, Youliang and Cheng, Li},
      journal={arXiv preprint arXiv:2407.04237},
      year={2024}
}