DiET-GS 🫨

Diffusion Prior and Event Stream-Assisted
Motion Deblurring 3D Gaussian Splatting

CVPR 2025

Department of Computer Science, National University of Singapore

We recommend reading our paper on arXiv, which provides higher-quality figures.

DiET-GS and DiET-GS++

Event stream

Blur images + 3DGS

DiET-GS++ (ours)


Our DiET-GS++ enables high-quality novel-view synthesis, recovering precise color and well-defined details from blurry multi-view images.

Abstract

Reconstructing sharp 3D representations from blurry multi-view images is a long-standing problem in computer vision. Recent works attempt to enhance novel view synthesis under motion blur by leveraging event-based cameras, benefiting from their high dynamic range and microsecond temporal resolution. However, they often reach sub-optimal visual quality, either restoring inaccurate color or losing fine-grained details. In this paper, we present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3DGS. Our framework effectively leverages both blur-free event streams and a diffusion prior in a two-stage training strategy. Specifically, we introduce a novel framework that constrains 3DGS with the event double integral, achieving both accurate color and well-defined details. Additionally, we propose a simple technique that leverages the diffusion prior to further enhance edge details. Qualitative and quantitative results on both synthetic and real-world data demonstrate that our DiET-GS produces significantly better-quality novel views than existing baselines.

Overall Framework


Overall framework of DiET-GS. Stage 1 (DiET-GS) optimizes the deblurring 3DGS by leveraging the event streams and diffusion prior. To preserve accurate color and clean details, we exploit the EDI prior in multiple ways, including color supervision $C$, guidance for fine-grained details $I$, and additional regularization $\tilde{I}$ via EDI simulation. Stage 2 (DiET-GS++) is then employed to maximize the effect of the diffusion prior by introducing extra learnable parameters $\mathbf{f}_{\mathbf{g}}$. DiET-GS++ further refines the rendered images from DiET-GS, effectively enhancing rich edge features. More details are given in Sec. 4.1 and Sec. 4.2 of the main paper.
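As background on the EDI prior used above, a minimal NumPy sketch of how the Event Double Integral relates a blurry frame to a latent sharp image: under the standard EDI model, the blurry frame is the exposure-time average of latent images, each obtained by rescaling a reference image with the exponentiated event accumulation. The function name, array layout, and contrast threshold `c` below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def edi_latent_image(blurry, event_sums, c=0.2):
    """Recover a latent sharp image L(t_ref) from a blurry frame
    via the Event Double Integral (EDI) relation.

    blurry:     (H, W) blurry intensity image (exposure-averaged)
    event_sums: (N, H, W) per-pixel cumulative polarity sums E(t)
                from the reference time t_ref to N sample times
                spanning the exposure window
    c:          event-camera contrast threshold (assumed value)
    """
    # L(t) = L(t_ref) * exp(c * E(t)), so averaging over the exposure:
    # B = L(t_ref) * (1/T) ∫ exp(c * E(t)) dt  (the "double integral")
    double_integral = np.exp(c * event_sums).mean(axis=0)
    # Solve for the latent sharp image at the reference time.
    return blurry / np.maximum(double_integral, 1e-8)
```

In DiET-GS this relation serves both as a color/detail prior and, run in the forward direction (averaging exponentiated event sums to re-blur a rendering), as the EDI simulation used for the regularization term $\tilde{I}$.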

Quantitative Results


Quantitative comparisons on novel-view synthesis with both synthetic and real-world datasets. The results are averaged over all scenes within each dataset. The best results are in bold and the second best are underlined. Our DiET-GS significantly outperforms existing baselines in PSNR, SSIM, and LPIPS, while our DiET-GS++ achieves the best scores on NR-IQA metrics such as MUSIQ and CLIP-IQA.

Quantitative comparisons on single-image deblurring with real-world datasets. Our DiET-GS++ consistently outperforms all baselines across all five real-world scenes.

Qualitative Results


DiET-GS shows cleaner texture with more accurate details than the event-based baselines, while DiET-GS++ further sharpens these features, achieving the best visual quality.

Blurry Image

EDI+GS

E2NeRF

Ev-DeblurNeRF

DiET-GS (Ours)

DiET-GS++ (Ours)

GT

Additional visualization results on novel view synthesis on the real-world dataset (Suppl)

EDI+GS

E2NeRF

Ev-DeblurNeRF

DiET-GS++ (Ours)

GT

Additional visualization results on novel view synthesis on the synthetic dataset (Suppl)

EDI+GS

E2NeRF

Ev-DeblurNeRF

DiET-GS++ (Ours)

GT

Here, we also present qualitative comparisons for single-image deblurring. As shown in the 2nd column, the frame-based image deblurring method NAFNet often produces inaccurate details since it relies solely on blurry images to recover fine-grained details. EDI and BeNeRF recover more precise details by benefiting from event-based cameras, although severe artifacts still remain. Our DiET-GS++ shows the best visual quality, with cleaner and better-defined details, by leveraging EDI and a pretrained diffusion model as priors.

Blur Image

NAFNet

EDI

BeNeRF

DiET-GS++ (Ours)

BibTeX

@misc{lee2025dietgsdiffusionpriorevent,
      title={DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting}, 
      author={Seungjun Lee and Gim Hee Lee},
      year={2025},
      eprint={2503.24210},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.24210}, 
}