⚠️ All of the summaries below are produced by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: do not rely on these summaries in serious academic settings; they are only meant for a quick first-pass screening before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated 2025-01-18
DehazeGS: Seeing Through Fog with 3D Gaussian Splatting
Authors: Jinze Yu, Yiqun Wang, Zhengda Lu, Jianwei Guo, Yong Li, Hongxing Qin, Xiaopeng Zhang
Current novel view synthesis tasks primarily rely on high-quality and clear images. However, in foggy scenes, scattering and attenuation can significantly degrade the reconstruction and rendering quality. Although NeRF-based dehazing reconstruction algorithms have been developed, their use of deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Moreover, NeRF’s implicit representation struggles to recover fine details from hazy scenes. In contrast, recent advancements in 3D Gaussian Splatting achieve high-quality 3D scene reconstruction by explicitly modeling point clouds into 3D Gaussians. In this paper, we propose leveraging the explicit Gaussian representation to explain the foggy image formation process through a physically accurate forward rendering process. We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media using only multi-view foggy images as input. We model the transmission within each Gaussian distribution to simulate the formation of fog. During this process, we jointly learn the atmospheric light and scattering coefficient while optimizing the Gaussian representation of the hazy scene. In the inference stage, we eliminate the effects of scattering and attenuation on the Gaussians and directly project them onto a 2D plane to obtain a clear view. Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance in terms of both rendering quality and computational efficiency. Visualizations are available at https://dehazegs.github.io/
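The abstract describes a physically based forward model: a fog-free rendering is re-hazed through a learned transmission term, with the atmospheric light and scattering coefficient optimized jointly. The sketch below illustrates that idea with the standard atmospheric scattering equation I(x) = J(x)·t(x) + A·(1 − t(x)), where t(x) = exp(−β·d(x)); the helper `render_gaussians` and the per-pixel (rather than per-Gaussian) placement of the transmission term are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch only: the standard atmospheric scattering model
# applied on top of a (hypothetical) Gaussian-splatting renderer.
import torch
import torch.nn as nn

class FoggyForwardModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Jointly learned scattering coefficient and atmospheric light (RGB).
        self.log_beta = nn.Parameter(torch.tensor(0.0))
        self.atmospheric_light = nn.Parameter(torch.full((3,), 0.8))

    def forward(self, clear_rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        """clear_rgb: (3, H, W) fog-free rendering; depth: (1, H, W)."""
        beta = self.log_beta.exp()                  # keep beta positive
        transmission = torch.exp(-beta * depth)     # t(x) = exp(-beta * d(x))
        A = self.atmospheric_light.view(3, 1, 1)
        # Composite the hazy observation from the clear image and airlight.
        return clear_rgb * transmission + A * (1.0 - transmission)

# Usage sketch: render the scene with an assumed splatting backend, then
# supervise the synthesized hazy image against the captured foggy photo.
# clear_rgb, depth = render_gaussians(gaussians, camera)   # assumed API
# hazy_pred = fog_model(clear_rgb, depth)
# loss = (hazy_pred - foggy_gt).abs().mean()
```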
Paper and project links
PDF, 9 pages, 4 figures
Summary
This paper proposes a 3D Gaussian Splatting based dehazing method, DehazeGS. The method uses an explicit Gaussian representation to model the foggy image formation process and decomposes and renders a fog-free background from multi-view foggy images. While optimizing the Gaussian representation, it jointly learns the atmospheric light and the scattering coefficient; at inference time, it removes the effects of scattering and attenuation from the Gaussians and projects them directly onto the 2D plane to obtain a clear view. Experiments show that DehazeGS achieves state-of-the-art rendering quality and computational efficiency.
Key Takeaways
- Current novel view synthesis tasks rely mainly on high-quality, clear images; in foggy scenes, scattering and attenuation severely degrade reconstruction and rendering quality.
- NeRF's implicit representation struggles to recover fine details in foggy scenes, whereas 3D Gaussian Splatting enables high-quality 3D scene reconstruction.
- DehazeGS uses an explicit Gaussian representation to model the foggy image formation process and decomposes and renders a fog-free background from multi-view foggy images.
- The method jointly learns the atmospheric light and the scattering coefficient while optimizing the Gaussian representation.
- After removing the effects of scattering and attenuation, DehazeGS projects the Gaussians directly onto the 2D plane to obtain a clear view.
- Experiments demonstrate that DehazeGS achieves state-of-the-art rendering quality and computational efficiency.
GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
Authors: Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
We propose a method that achieves state-of-the-art rendering quality and efficiency on monocular dynamic scene reconstruction using deformable 3D Gaussians. Implicit deformable representations commonly model motion with a canonical space and time-dependent backward-warping deformation field. Our method, GauFRe, uses a forward-warping deformation to explicitly model non-rigid transformations of scene geometry. Specifically, we propose a template set of 3D Gaussians residing in a canonical space, and a time-dependent forward-warping deformation field to model dynamic objects. Additionally, we tailor a 3D Gaussian-specific static component supported by an inductive bias-aware initialization approach which allows the deformation field to focus on moving scene regions, improving the rendering of complex real-world motion. The differentiable pipeline is optimized end-to-end with a self-supervised rendering loss. Experiments show our method achieves competitive results and higher efficiency than both previous state-of-the-art NeRF and Gaussian-based methods. For real-world scenes, GauFRe can train in ~20 mins and offer 96 FPS real-time rendering on an RTX 3090 GPU. Project website: https://lynl7130.github.io/gaufre/index.html
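The abstract describes a canonical set of 3D Gaussians deformed by a time-dependent forward-warping field. The sketch below shows one plausible form of such a field as a small MLP that predicts offsets to Gaussian position, rotation, and scale; the network size, the absence of positional encoding, and the static/dynamic split in the usage comment are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a forward-warping deformation field over canonical
# 3D Gaussians, in the spirit of GauFRe (illustrative only).
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Input: canonical mean (3) + time (1); output: offsets for
        # mean (3), rotation quaternion (4), and log-scale (3).
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 10),
        )

    def forward(self, canonical_xyz: torch.Tensor, t: torch.Tensor):
        """canonical_xyz: (N, 3); t: scalar time in [0, 1]."""
        t_feat = t.expand(canonical_xyz.shape[0], 1)
        d = self.mlp(torch.cat([canonical_xyz, t_feat], dim=-1))
        d_xyz, d_rot, d_scale = d[:, :3], d[:, 3:7], d[:, 7:]
        # Forward-warp: deform canonical Gaussians to time t before splatting.
        return canonical_xyz + d_xyz, d_rot, d_scale

# Usage sketch: only a dynamic subset of Gaussians would be deformed, while a
# separate static set is splatted unchanged, letting the field focus on
# moving regions (the paper initializes this split with an inductive-bias-aware
# scheme).
# xyz_t, d_rot, d_scale = deform(dynamic_gaussians_xyz, torch.tensor(0.3))
```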
Paper and project links
PDF, WACV 2025, 11 pages, 8 figures, 5 tables
Summary
This paper proposes a deformable-3D-Gaussian method for monocular dynamic scene reconstruction that achieves state-of-the-art rendering quality and efficiency. The method uses a forward-warping deformation field to explicitly model non-rigid transformations of the scene geometry, and the differentiable pipeline is optimized end-to-end with a self-supervised rendering loss, yielding high-quality rendering of complex real-world motion.
Key Takeaways
- Proposes GauFRe, a new method that uses deformable 3D Gaussians for monocular dynamic scene reconstruction.
- Models non-rigid scene deformation with a forward-warping deformation field, improving the rendering quality of dynamic scenes.
- Introduces a 3D-Gaussian-specific static component with an inductive-bias-aware initialization, allowing the deformation field to focus on moving scene regions.
- The differentiable pipeline is optimized end-to-end with a self-supervised rendering loss.
- Achieves competitive results with higher efficiency than prior state-of-the-art NeRF-based and Gaussian-based methods.
- For real-world scenes, GauFRe trains in about 20 minutes and renders at 96 FPS in real time on an RTX 3090 GPU.