Point-Based Modeling of Human Clothing

Ilya Zakharkin, Kirill Mazur, Artur Grigorev, Victor Lempitsky; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14718-14727

Abstract


We propose a new approach to human clothing modeling based on point clouds. Within this approach, we learn a deep model that can predict point clouds of various outfits, for various human poses, and for various human body shapes. Notably, outfits of various types and topologies can be handled by the same model. Using the learned model, we can infer the geometry of new outfits from as little as a single image, and perform outfit retargeting to new bodies in new poses. We complement our geometric model with appearance modeling that uses the point cloud geometry as a geometric scaffolding and employs neural point-based graphics to capture outfit appearance from videos and to re-render the captured outfits. We validate both geometric modeling and appearance modeling aspects of the proposed approach against recently proposed methods and establish the viability of point-based clothing modeling.
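To make the conditioning described above concrete, below is a minimal, hypothetical sketch of a point-cloud prediction network that maps a latent outfit code together with SMPL-style pose and shape parameters to an outfit point cloud. All names, layer sizes, and dimensionalities (OutfitPointCloudNet, N_POINTS, OUTFIT_DIM, the plain MLP decoder) are illustrative assumptions and do not reflect the paper's actual architecture or released code.

```python
# Hypothetical sketch, NOT the authors' implementation: predict an outfit
# point cloud from an outfit code and SMPL-style pose/shape parameters.
import torch
import torch.nn as nn

N_POINTS = 8192      # number of points in the predicted outfit cloud (assumed)
POSE_DIM = 72        # SMPL pose parameters: 24 joints x 3 axis-angle values
SHAPE_DIM = 10       # SMPL shape coefficients
OUTFIT_DIM = 128     # latent outfit code dimensionality (assumed)


class OutfitPointCloudNet(nn.Module):
    """Predicts clothing geometry as a point cloud, conditioned on
    an outfit code, body pose, and body shape."""

    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(OUTFIT_DIM + POSE_DIM + SHAPE_DIM, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, N_POINTS * 3),  # one 3D position per point
        )

    def forward(self, outfit_code, pose, shape):
        cond = torch.cat([outfit_code, pose, shape], dim=-1)
        points = self.decoder(cond).view(-1, N_POINTS, 3)
        return points  # (batch, N_POINTS, 3)


if __name__ == "__main__":
    net = OutfitPointCloudNet()
    outfit_code = torch.randn(1, OUTFIT_DIM)  # e.g. inferred from a single image
    pose = torch.randn(1, POSE_DIM)
    shape = torch.randn(1, SHAPE_DIM)
    cloud = net(outfit_code, pose, shape)
    print(cloud.shape)  # torch.Size([1, 8192, 3])
```

In this sketch, retargeting an outfit to a new body in a new pose amounts to keeping the outfit code fixed while swapping the pose and shape inputs; the predicted point cloud could then serve as the geometric scaffold for a point-based rendering stage, as the abstract describes.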

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zakharkin_2021_ICCV,
    author    = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
    title     = {Point-Based Modeling of Human Clothing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14718-14727}
}