MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics

KAIST
NeurIPS 2025

MPMAvatar is a framework for creating 3D Gaussian avatars from multi-view videos that supports physically accurate and robust animation, especially for loose garments. It also generalizes zero-shot to novel scene interactions.

Abstract

While there has been significant progress in the field of 3D avatar creation from visual observations, modeling physically plausible dynamics of humans with loose garments remains a challenging problem. Although a few existing works address this problem by leveraging physical simulation, they suffer from limited accuracy or robustness to novel animation inputs. In this work, we present MPMAvatar, a framework for creating 3D human avatars from multi-view videos that supports highly realistic, robust animation, as well as photorealistic rendering from free viewpoints. For accurate and robust dynamics modeling, our key idea is to use a Material Point Method-based simulator, which we carefully tailor to model garments with complex deformations and contact with the underlying body by incorporating an anisotropic constitutive model and a novel collision handling algorithm. We combine this dynamics modeling scheme with our canonical avatar that can be rendered using 3D Gaussian Splatting with quasi-shadowing, enabling high-fidelity rendering for physically realistic animations. In our experiments, we demonstrate that MPMAvatar significantly outperforms the existing state-of-the-art physics-based avatar in terms of (1) dynamics modeling accuracy, (2) rendering accuracy, and (3) robustness and efficiency. Additionally, we present a novel application in which our avatar generalizes to unseen interactions in a zero-shot manner—which was not achievable with previous learning-based methods due to their limited simulation generalizability.
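To make the anisotropic constitutive modeling mentioned above concrete, below is a minimal, generic sketch of a fiber-reinforced (transversely isotropic) hyperelastic stress in NumPy. This illustrates the general idea only; it is not the paper's constitutive model or simulator, and the material direction `a`, the stiffness parameters, and the neo-Hookean base energy are assumptions made for the example.

```python
import numpy as np

def first_piola_kirchhoff(F, a, mu=50.0, lam=100.0, k_aniso=200.0):
    """Generic transversely isotropic stress: an isotropic neo-Hookean base
    plus a fiber term that stiffens stretch along material direction `a`.
    Illustrative only; parameters and energy are assumptions, not the
    paper's constitutive model."""
    J = np.linalg.det(F)
    F_inv_T = np.linalg.inv(F).T
    # Isotropic compressible neo-Hookean part.
    P_iso = mu * (F - F_inv_T) + lam * np.log(J) * F_inv_T
    # Anisotropic fiber term: Psi_a = k * (I4 - 1)^2 with I4 = |F a|^2.
    Fa = F @ a
    I4 = Fa @ Fa
    P_aniso = 4.0 * k_aniso * (I4 - 1.0) * np.outer(Fa, a)
    return P_iso + P_aniso

# Example: slight uniaxial stretch along the fiber direction.
F = np.diag([1.05, 1.0, 1.0])
a = np.array([1.0, 0.0, 0.0])   # unit material (fiber) direction
print(first_piola_kirchhoff(F, a))
```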

Overview of our dynamic avatar modeling

We represent our canonical avatar as a hybrid of (1) a mesh with physical parameters for geometry and dynamics modeling, and (2) 3D Gaussian Splats for appearance modeling. The avatar is animated via linear blend skinning for non-garment regions and physical simulation for garment regions, using our novel collision handling algorithm.
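For reference, here is a minimal linear blend skinning (LBS) sketch in NumPy. It implements the standard LBS blend of per-bone rigid transforms; the toy bone transforms and weights below are made-up inputs, and this is not the paper's full animation pipeline, which couples skinning with the garment simulation.

```python
import numpy as np

def linear_blend_skinning(verts, weights, bone_transforms):
    """Standard LBS: each canonical vertex is deformed by a weighted blend
    of per-bone rigid transforms.  verts: (V, 3), weights: (V, B),
    bone_transforms: (B, 4, 4) homogeneous matrices."""
    V = verts.shape[0]
    verts_h = np.concatenate([verts, np.ones((V, 1))], axis=1)    # (V, 4)
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)  # (V, 4, 4)
    posed_h = np.einsum('vij,vj->vi', blended, verts_h)           # (V, 4)
    return posed_h[:, :3]

# Toy example with two bones (illustrative inputs, not the avatar's rig).
verts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.3, 0.7]])
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.1, 0.0, 0.0]   # second bone shifted in x
print(linear_blend_skinning(verts, weights, np.stack([T0, T1])))
```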
Visualization key. Blue arrows indicate body grid velocities, green arrows denote garment grid velocities, and red arrows show colliding grid regions where velocity projection is applied.
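The collision handling referred to in the key above operates on grid velocities. Below is a minimal sketch of the general idea: for colliding grid nodes, remove the component of the garment velocity (relative to the body) that approaches the body surface, with optional Coulomb friction on the tangential part. This is a generic MPM-style grid collision treatment written for illustration; the node flags, normals, and friction coefficient are assumed inputs, and the paper's actual algorithm may differ.

```python
import numpy as np

def project_colliding_grid_velocities(v_garment, v_body, normals,
                                      colliding, mu_friction=0.2):
    """Generic grid-level collision response (illustrative, not the paper's
    exact algorithm).  Shapes: v_garment, v_body, normals: (N, 3);
    colliding: (N,) bool; normals point outward from the body."""
    v_out = v_garment.copy()
    for i in np.flatnonzero(colliding):
        n = normals[i]
        v_rel = v_garment[i] - v_body[i]
        vn = v_rel @ n
        if vn < 0.0:                       # garment moving into the body
            v_t = v_rel - vn * n           # tangential relative velocity
            t_norm = np.linalg.norm(v_t)
            if t_norm > 1e-8:
                # Coulomb friction: shrink the tangential slip.
                v_t = max(0.0, 1.0 + mu_friction * vn / t_norm) * v_t
            else:
                v_t = np.zeros(3)
            v_out[i] = v_body[i] + v_t     # approaching component removed
    return v_out
```

In a typical MPM pipeline, a projection of this kind runs on the grid after forces are applied and before velocities are transferred back to the particles.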

Novel Pose Driving Results

Zero-Shot Scene Interactions Results

Supplementary Video

BibTeX

@inproceedings{lee2025mpmavatar,
  title={MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics},
  author={Lee, Changmin and Lee, Jihyun and Kim, Tae-Kyun},
  booktitle={NeurIPS},
  year={2025}
}