DrapeNet: Garment Generation and Self-Supervised Draping

Luca De Luigi, Ren Li, Benoît Guillard, Mathieu Salzmann, Pascal Fua; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 1451-1460

Abstract


Recent approaches to drape garments quickly over arbitrary human bodies leverage self-supervision to eliminate the need for large training sets. However, they are designed to train one network per clothing item, which severely limits their generalization abilities. In our work, we rely on self-supervision to train a single network to drape multiple garments. This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network, which models garments as unsigned distance fields. Our pipeline can generate and drape previously unseen garments of any topology, whose shape can be edited by manipulating their latent codes. Being fully differentiable, our formulation makes it possible to recover accurate 3D models of garments from partial observations -- images or 3D scans -- via gradient descent. Our code is publicly available at https://github.com/liren2515/DrapeNet.
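The two ideas at the heart of the abstract are (i) a deformation field that displaces rest-pose garment points conditioned on a latent code describing the garment, and (ii) exploiting full differentiability to recover that latent code from observations by gradient descent. The sketch below illustrates both with a deliberately tiny numpy toy: the deformation field, the weights, and the point-to-point loss are all hypothetical stand-ins, not the paper's actual networks (which predict the field with learned MLPs and model garments as unsigned distance fields).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned deformation field: rest-pose garment
# points x are displaced by an amount that depends linearly on a 2-D latent
# code z (the paper conditions a learned 3D deformation field on the latent
# code of a generative garment model; this toy only mimics that structure).
A = rng.normal(size=(3, 8))          # fixed "feature" weights
B0 = rng.normal(size=(8, 3)) * 0.1   # displacement basis for z[0]
B1 = rng.normal(size=(8, 3)) * 0.1   # displacement basis for z[1]

def deform(points, z):
    """x' = x + D(x, z): displace points conditioned on latent code z."""
    feats = np.tanh(points @ A)                  # per-point features
    return points + feats @ (z[0] * B0 + z[1] * B1)

# Synthetic "observation": a garment draped with an unknown latent code.
x_rest = rng.normal(size=(64, 3))
z_true = np.array([0.7, -0.3])
target = deform(x_rest, z_true)

def loss(z):
    """Toy point-to-point fitting loss against the observed geometry."""
    return np.mean((deform(x_rest, z) - target) ** 2)

# Because everything above is differentiable, the latent code can be
# recovered by gradient descent; here we use central finite differences
# instead of autograd to keep the sketch dependency-free.
z, lr, eps = np.zeros(2), 1.0, 1e-4
for _ in range(300):
    grad = np.array([
        (loss(z + eps * np.eye(2)[i]) - loss(z - eps * np.eye(2)[i])) / (2 * eps)
        for i in range(2)
    ])
    z -= lr * grad

print(z)  # converges toward z_true
```

In the paper this same fitting loop runs through the real draping network and a differentiable renderer or distance term, which is what allows garment recovery from images or 3D scans.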

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{De_Luigi_2023_CVPR,
    author    = {De Luigi, Luca and Li, Ren and Guillard, Beno{\^\i}t and Salzmann, Mathieu and Fua, Pascal},
    title     = {DrapeNet: Garment Generation and Self-Supervised Draping},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {1451-1460}
}