Learning-Based Cloth Material Recovery From Video

Shan Yang, Junbang Liang, Ming C. Lin; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4383-4393

Abstract

Image understanding enables better reconstruction of the physical world from images and videos. Existing methods focus largely on the geometry and visual appearance of the reconstructed scene. In this paper, we extend the frontier of image understanding and present a new technique to recover the material properties of cloth from a video. Previous cloth material recovery methods often require markers or a complex experimental setup to acquire physical properties, or are limited to certain types of images/videos. Our approach takes advantage of the appearance changes of the moving cloth to infer its physical properties. To extract information about the cloth, our method characterizes both the motion space and the visual appearance of the cloth geometry. We apply a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) neural network to recover cloth material properties from videos. We also exploit simulated data to aid the statistical learning of the mapping between the visual appearance and the motion dynamics of the cloth. The effectiveness of our method is demonstrated via validation on simulated datasets and real-life recorded videos.
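
To make the CNN+LSTM pipeline described in the abstract concrete, below is a minimal PyTorch sketch of one plausible realization: a small per-frame CNN encoder feeds an LSTM, whose final hidden state is regressed to cloth material parameters. This is not the authors' code; the backbone layers, feature dimensions, and the choice of two output parameters (e.g., stretching and bending stiffness) are all assumptions for illustration.

import torch
import torch.nn as nn

class ClothMaterialNet(nn.Module):
    """Hypothetical CNN+LSTM sketch: per-frame CNN features -> LSTM -> material parameters."""

    def __init__(self, num_params=2, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Small per-frame CNN encoder (placeholder; the paper does not specify a backbone here).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the per-frame appearance features over time,
        # capturing the motion dynamics of the deforming cloth.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Regress material parameters from the last hidden state.
        self.head = nn.Linear(hidden_dim, num_params)

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])

# Usage: two 8-frame RGB clips at 64x64 resolution.
model = ClothMaterialNet()
clip = torch.randn(2, 8, 3, 64, 64)
print(model(clip).shape)  # torch.Size([2, 2])

In this sketch such a network would be trained with supervised regression on simulated cloth videos, where ground-truth material parameters are known from the simulator, mirroring the paper's use of simulated data to learn the appearance-to-dynamics mapping.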

Related Material

[pdf]
[bibtex]
@InProceedings{Yang_2017_ICCV,
author = {Yang, Shan and Liang, Junbang and Lin, Ming C.},
title = {Learning-Based Cloth Material Recovery From Video},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}