Layered-Garment Net: Generating Multiple Implicit Garment Layers from a Single Image

Alakh Aggarwal, Jikai Wang, Steven Hogue, Saifeng Ni, Madhukar Budagavi, Xiaohu Guo; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3000-3017

Abstract


Recent research has focused on generating human models and garments from 2D images. However, state-of-the-art methods either reconstruct only a single garment layer on a human model or generate multiple garment layers without any guarantee of an intersection-free geometric relationship between them. In reality, people wear multiple layers of garments in their daily lives, where an inner garment layer may be partially covered by an outer one. In this paper, we address this multi-layer modeling problem and propose the Layered-Garment Net (LGN), which can generate intersection-free multiple layers of garments defined by implicit function fields over the body surface, given a near front-view image of the person. With a special design of garment indication fields (GIF), we can enforce an implicit covering relationship between the signed distance fields (SDF) of different layers to avoid intersections among different garment surfaces and the human body. Experiments demonstrate the strength of our proposed LGN framework in generating multi-layer garments compared to state-of-the-art methods. To the best of our knowledge, LGN is the first work to generate intersection-free multiple layers of garments on the human body from a single image.
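As a rough illustration of the covering relationship mentioned in the abstract (not taken from the paper's code), the sketch below checks whether an outer layer's SDF encloses an inner layer's SDF wherever a garment indication mask says it should. It assumes SDFs are negative inside a surface; the function and variable names (covering_violation, sdf_inner, sdf_outer, covering_mask) are hypothetical.

```python
# Illustrative sketch only: a pointwise check of an SDF "covering" constraint,
# assuming each surface is an SDF that is negative inside. Not the LGN implementation.
import numpy as np

def covering_violation(sdf_inner: np.ndarray,
                       sdf_outer: np.ndarray,
                       covering_mask: np.ndarray) -> np.ndarray:
    """Per-point violation of the covering relationship.

    Where the (hypothetical) indication mask says the outer layer covers the
    inner one, the outer surface must enclose the inner surface, which for
    nested regions implies sdf_outer <= sdf_inner at those query points.
    Positive return values indicate an intersection.
    """
    violation = np.where(covering_mask, sdf_outer - sdf_inner, 0.0)
    return np.maximum(violation, 0.0)

# Toy usage: two nested spheres standing in for a body and a shirt layer.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1000, 3))
sdf_body  = np.linalg.norm(pts, axis=1) - 0.50   # inner surface
sdf_shirt = np.linalg.norm(pts, axis=1) - 0.55   # outer surface enclosing it
mask = np.ones(len(pts), dtype=bool)             # outer covers inner everywhere
print(covering_violation(sdf_body, sdf_shirt, mask).max())  # 0.0 -> intersection-free
```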

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Aggarwal_2022_ACCV,
    author    = {Aggarwal, Alakh and Wang, Jikai and Hogue, Steven and Ni, Saifeng and Budagavi, Madhukar and Guo, Xiaohu},
    title     = {Layered-Garment Net: Generating Multiple Implicit Garment Layers from a Single Image},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {3000-3017}
}