HAHA - a novel approach for animatable human avatar generation from monocular input videos.
The proposed method learns the trade-off between Gaussian splatting and a textured mesh for efficient, high-fidelity rendering. We demonstrate its efficiency in animating and rendering full-body human avatars controlled via the SMPL-X parametric model. Our model learns to apply Gaussian splatting only in areas of the SMPL-X mesh where it is necessary, such as hair and out-of-mesh clothing. This results in a minimal number of Gaussians being used to represent the full avatar and in reduced rendering artifacts, and it allows us to handle the animation of small body parts such as fingers that are traditionally disregarded.
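As a rough illustration of that idea, here is a minimal sketch of opacity-based pruning; `prune_gaussians` and the threshold value are hypothetical, not the paper's exact criterion, but they convey how Gaussians that the textured mesh already covers can be dropped:

```python
import torch

def prune_gaussians(means, opacities, threshold=0.05):
    # Boolean mask over Gaussians: drop those that are nearly transparent,
    # i.e. regions where the textured mesh alone already suffices.
    keep = opacities.squeeze(-1) > threshold
    return means[keep], opacities[keep]

means = torch.randn(10_000, 3)     # candidate Gaussian centers
opacities = torch.rand(10_000, 1)  # learned per-Gaussian opacities
means, opacities = prune_gaussians(means, opacities)
print(f"kept {means.shape[0]} of 10000 Gaussians")
```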
We demonstrate the effectiveness of our approach on two open datasets: SnapshotPeople and X-Humans. Our method achieves reconstruction quality on par with the state of the art on SnapshotPeople while using less than a third of the Gaussians. HAHA outperforms the previous state of the art on novel poses from X-Humans, both quantitatively and qualitatively.
https://www.youtube.com/watch?v=vBzdAOKi1w0
- First stage: Gaussian avatar training.
- Second stage: RGB texture training.
- The use of the differentiable rasterizer lets us back-propagate to the avatar's parameters.
We optimize only the texture, keeping SMPL-X's parameters frozen during the whole stage.
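A minimal PyTorch sketch of this setup follows; `render_textured_mesh`, the dummy tensors, and the plain L1 loss are illustrative stand-ins rather than the paper's actual rasterizer or objective, but they show how back-propagation reaches only the texture while the SMPL-X parameters stay frozen:

```python
import torch

# Learnable UV texture map; the resolution here is an arbitrary choice.
texture = torch.nn.Parameter(torch.full((512, 512, 3), 0.5))

# Frozen SMPL-X parameters (dummy placeholders): no gradients flow into them.
smplx_pose = torch.zeros(55, 3)
smplx_shape = torch.zeros(10)

# Only the texture is registered with the optimizer.
optimizer = torch.optim.Adam([texture], lr=1e-2)

def render_textured_mesh(texture, pose, shape):
    """Hypothetical stand-in for the differentiable rasterizer.

    A real implementation would pose the SMPL-X mesh and sample the UV
    texture per pixel; here we just produce a differentiable dummy image.
    """
    return texture.mean(dim=(0, 1)).expand(256, 256, 3)

target = torch.rand(256, 256, 3)  # a ground-truth video frame (dummy data)

for step in range(100):
    optimizer.zero_grad()
    rendered = render_textured_mesh(texture, smplx_pose, smplx_shape)
    loss = (rendered - target).abs().mean()  # simple L1 photometric loss
    loss.backward()   # gradients reach the texture through the rasterizer
    optimizer.step()
```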
We apply the total variation loss L_TV in the texture space instead of the image space, as we aim to reduce texture artifacts.
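A standard anisotropic total-variation term computed directly on the texture map could look like the sketch below; we assume this matches the paper's L_TV in spirit, and the weight used in the usage line is an arbitrary illustration:

```python
import torch

def texture_tv_loss(texture: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation over an (H, W, 3) texture map.

    Penalizing differences between neighboring texels smooths the texture
    itself, so artifacts in occluded or rarely observed UV regions are
    reduced even though they barely appear in rendered training images.
    """
    dh = (texture[1:, :, :] - texture[:-1, :, :]).abs().mean()  # vertical neighbors
    dw = (texture[:, 1:, :] - texture[:, :-1, :]).abs().mean()  # horizontal neighbors
    return dh + dw

# Usage: weight the TV term and add it to the photometric loss.
texture = torch.rand(512, 512, 3, requires_grad=True)
loss = 0.1 * texture_tv_loss(texture)
```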