⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: never use these summaries for serious academic purposes; they are only meant for an initial screening before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated 2025-10-04
MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics
Authors: Changmin Lee, Jihyun Lee, Tae-Kyun Kim
While there has been significant progress in the field of 3D avatar creation from visual observations, modeling physically plausible dynamics of humans with loose garments remains a challenging problem. Although a few existing works address this problem by leveraging physical simulation, they suffer from limited accuracy or robustness to novel animation inputs. In this work, we present MPMAvatar, a framework for creating 3D human avatars from multi-view videos that supports highly realistic, robust animation, as well as photorealistic rendering from free viewpoints. For accurate and robust dynamics modeling, our key idea is to use a Material Point Method-based simulator, which we carefully tailor to model garments with complex deformations and contact with the underlying body by incorporating an anisotropic constitutive model and a novel collision handling algorithm. We combine this dynamics modeling scheme with our canonical avatar that can be rendered using 3D Gaussian Splatting with quasi-shadowing, enabling high-fidelity rendering for physically realistic animations. In our experiments, we demonstrate that MPMAvatar significantly outperforms the existing state-of-the-art physics-based avatar in terms of (1) dynamics modeling accuracy, (2) rendering accuracy, and (3) robustness and efficiency. Additionally, we present a novel application in which our avatar generalizes to unseen interactions in a zero-shot manner, which was not achievable with previous learning-based methods due to their limited simulation generalizability. Our project page is at: https://KAISTChangmin.github.io/MPMAvatar/
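The abstract names the Material Point Method (MPM) but gives no equations, so the following is a minimal, generic sketch of one explicit MPM step (particle-to-grid transfer, grid momentum update, grid-to-particle transfer). It is a sketch under stated assumptions only: trilinear interpolation weights, a pure PIC transfer, explicit Euler integration, and a dictionary-backed sparse grid. It is not the paper's tailored simulator, which additionally uses an anisotropic constitutive model for garments and a custom collision handling algorithm; here the constitutive model is abstracted into a per-particle stress input.

```python
# Minimal generic explicit MPM step: P2G -> grid update -> G2P.
# Illustrative only; dt, dx, trilinear weights, and the PIC transfer are
# assumptions, not choices taken from the MPMAvatar paper.
import numpy as np

dt, dx = 1e-4, 1.0 / 64           # assumed time step and grid spacing
inv_dx = 1.0 / dx
gravity = np.array([0.0, -9.8, 0.0])

def mpm_step(x, v, mass, stress_vol):
    """One explicit MPM step.

    x          : (N, 3) particle positions
    v          : (N, 3) particle velocities
    mass       : (N,)   particle masses
    stress_vol : (N, 3, 3) per-particle Cauchy stress times particle volume
                 (in the paper this would come from the anisotropic garment
                 model; here it is just an input array)
    """
    grid_m, grid_mom, grid_f = {}, {}, {}

    # --- Particle-to-grid: scatter mass, momentum, and internal force -----
    for p in range(len(x)):
        base = np.floor(x[p] * inv_dx).astype(int)   # lower corner of the cell
        frac = x[p] * inv_dx - base                  # position inside the cell
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    node = (base[0] + i, base[1] + j, base[2] + k)
                    wx = frac[0] if i else 1.0 - frac[0]
                    wy = frac[1] if j else 1.0 - frac[1]
                    wz = frac[2] if k else 1.0 - frac[2]
                    w = wx * wy * wz                 # trilinear weight
                    grad_w = np.array([              # its world-space gradient
                        (inv_dx if i else -inv_dx) * wy * wz,
                        wx * (inv_dx if j else -inv_dx) * wz,
                        wx * wy * (inv_dx if k else -inv_dx)])
                    grid_m[node] = grid_m.get(node, 0.0) + w * mass[p]
                    grid_mom[node] = grid_mom.get(node, 0.0) + w * mass[p] * v[p]
                    # internal force from the stress divergence
                    grid_f[node] = grid_f.get(node, 0.0) - stress_vol[p] @ grad_w

    # --- Grid update: explicit momentum update with gravity ---------------
    grid_v = {n: grid_mom[n] / m + dt * (grid_f[n] / m + gravity)
              for n, m in grid_m.items() if m > 1e-12}

    # --- Grid-to-particle: gather velocities (PIC) and advect particles ---
    for p in range(len(x)):
        base = np.floor(x[p] * inv_dx).astype(int)
        frac = x[p] * inv_dx - base
        v_new = np.zeros(3)
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    node = (base[0] + i, base[1] + j, base[2] + k)
                    if node in grid_v:
                        wx = frac[0] if i else 1.0 - frac[0]
                        wy = frac[1] if j else 1.0 - frac[1]
                        wz = frac[2] if k else 1.0 - frac[2]
                        v_new += (wx * wy * wz) * grid_v[node]
        v[p] = v_new
        x[p] = x[p] + dt * v[p]
    return x, v

# Tiny smoke test with zero stress (particles simply fall under gravity):
N = 8
pts = np.random.rand(N, 3) * 0.5 + 0.25
pts, vel = mpm_step(pts, np.zeros((N, 3)), np.full(N, 1e-3), np.zeros((N, 3, 3)))
```

Production MPM codes typically replace the trilinear kernel with quadratic B-splines and the PIC transfer with APIC/FLIP blends for less dissipation; the sketch keeps the simplest choices so the three-phase structure stays visible.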
Paper and project links
PDF (Accepted to NeurIPS 2025)
Summary
While research on creating 3D human avatars from multi-view videos has made progress, modeling the dynamics of the human body and the physical behavior of its clothing remains challenging. This paper proposes MPMAvatar, a framework that uses a Material Point Method (MPM) simulator to achieve accurate and robust dynamics modeling and to create highly realistic, animatable avatars from multi-view videos. The method significantly outperforms existing approaches in dynamics modeling accuracy, rendering accuracy, robustness, and efficiency. The study also demonstrates zero-shot generalization: the avatar performs well in interactions it has never seen.
Key Takeaways
- The MPMAvatar framework creates 3D human avatars from multi-view videos and supports highly realistic animation.
- The framework uses Material Point Method (MPM) simulation to achieve accurate and robust dynamics modeling.
- MPMAvatar accurately simulates the complex interaction between garments and the human body (a generic collision-handling sketch follows this list).
- Compared with existing methods, MPMAvatar improves both dynamics modeling accuracy and rendering accuracy.
- The framework is both robust and efficient.
- The avatar shows strong generalization, performing well in a zero-shot manner on unseen interaction scenarios.
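The abstract credits part of MPMAvatar's robustness to a novel collision handling algorithm but does not describe it. For context only, the sketch below shows the standard baseline such an algorithm would replace or extend: projecting grid-node velocities against a signed distance field (SDF) of the underlying body, with Coulomb-style friction. This is the classic MPM grid collision response, not the paper's algorithm; the callables body_sdf, body_sdf_grad, and body_vel and the friction coefficient are hypothetical placeholders.

```python
# Standard SDF-based grid collision response for MPM (generic baseline, NOT the
# paper's novel algorithm). body_sdf, body_sdf_grad, and body_vel are assumed,
# hypothetical callables returning the body's signed distance, its gradient,
# and the body's velocity at a world-space point.
import numpy as np

def resolve_body_collision(node_pos, node_vel, body_sdf, body_sdf_grad,
                           body_vel, friction=0.2):
    """Project one grid node's velocity out of the body, with Coulomb friction."""
    if body_sdf(node_pos) > 0.0:
        return node_vel                          # node outside the body: no contact
    n = body_sdf_grad(node_pos)
    n = n / (np.linalg.norm(n) + 1e-12)          # outward unit normal
    v_rel = node_vel - body_vel(node_pos)        # velocity relative to the body
    vn = v_rel @ n                               # normal component (scalar)
    if vn >= 0.0:
        return node_vel                          # already separating: leave as-is
    v_t = v_rel - vn * n                         # tangential (sliding) component
    t_norm = np.linalg.norm(v_t)
    if t_norm > 1e-12:
        # Coulomb friction: reduce sliding in proportion to the normal impulse
        v_t *= max(0.0, 1.0 - friction * (-vn) / t_norm)
    else:
        v_t = np.zeros_like(node_vel)            # stick: no tangential motion
    return body_vel(node_pos) + v_t              # inward normal motion removed
```

In a full simulator this projection runs once per step on every grid node near the body, between the grid momentum update and the grid-to-particle transfer.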