DIG: Draping Implicit Garment over the Human Body

Ren Li, Benoit Guillard, Edoardo Remelli, Pascal Fua; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 2780-2795


Existing data-driven methods for draping garments over human bodies, despite being effective, cannot handle garments of arbitrary topology and are typically not end-to-end differentiable. To address these limitations, we propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on the shape and pose parameters of an articulated body model. To limit body-garment interpenetrations and artifacts, we propose an interpenetration-aware pre-processing strategy for the training data and a novel training loss that penalizes self-intersections while draping garments. We demonstrate that our method yields more accurate results for garment reconstruction and deformation than state-of-the-art methods. Furthermore, we show that our method, thanks to its end-to-end differentiability, makes it possible to recover body and garment parameters jointly from image observations, something that previous work could not do. Our code is available at https://github.com/liren2515/DIG.
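The core deformation step described above, draping garment points with a skinning field, can be illustrated with a minimal linear-blend-skinning sketch. This is a generic illustration, not the paper's implementation: in DIG the per-point weights would come from a learned skinning field conditioned on body shape and pose, whereas here `blend_skinning` is a hypothetical helper that simply applies given weights and bone transforms.

```python
import numpy as np

def blend_skinning(points, weights, bone_transforms):
    """Deform rest-pose points via linear blend skinning (LBS).

    points:          (N, 3) garment points in the rest pose
    weights:         (N, B) skinning weights per point (rows sum to 1);
                     in DIG these would be predicted by the skinning field
    bone_transforms: (B, 4, 4) rigid transform of each body bone
    """
    # Lift points to homogeneous coordinates: (N, 4)
    homo = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    # Blend the bone transforms per point: (N, 4, 4)
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)
    # Apply each blended transform to its point, then drop the w component
    deformed = np.einsum('nij,nj->ni', blended, homo)
    return deformed[:, :3]

# Toy usage: two bones, the second translated by +1 along y.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])       # second point is half-attached to each bone
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.0, 1.0, 0.0]
out = blend_skinning(pts, w, np.stack([T0, T1]))
# First point stays put; second moves halfway with the translated bone.
```

Because every step is a differentiable tensor operation, gradients can flow from deformed garment points back to the skinning weights and body parameters, which is what enables the joint recovery from images mentioned above.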

Related Material

@InProceedings{Li_2022_ACCV,
    author    = {Li, Ren and Guillard, Benoit and Remelli, Edoardo and Fua, Pascal},
    title     = {DIG: Draping Implicit Garment over the Human Body},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {2780-2795}
}