Deep Photometric Stereo Network

Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 501-509

Abstract


This paper presents a photometric stereo method based on deep learning. One of the major difficulties in photometric stereo is designing an appropriate reflectance model that is both capable of representing real-world reflectances and computationally tractable for deriving surface normals. Unlike previous photometric stereo methods that rely on a simplified parametric image formation model, such as the Lambertian model, the proposed method aims at establishing a flexible mapping between complex reflectance observations and surface normals by means of a deep neural network. As a result, we propose a deep photometric stereo network (DPSN) that takes reflectance observations under varying light directions and infers the corresponding surface normal per pixel. To make the DPSN applicable to real-world objects, a database of measured bidirectional reflectance distribution functions (the MERL BRDF database) is used for training the network. Evaluation on simulated and real-world scenes shows the effectiveness of the proposed approach over previous techniques.
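
The following is a minimal sketch, not the authors' released code, of the per-pixel idea described in the abstract: a small network maps the intensities observed at one pixel under a fixed set of light directions to a unit surface normal, and is trained on synthetic observations rendered with measured BRDFs. The layer widths, the choice of 96 light directions, and the cosine loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelNormalNet(nn.Module):
    """Maps per-pixel intensities under num_lights directions to a unit normal.
    Architecture details (hidden width, depth) are assumptions, not the paper's exact DPSN."""
    def __init__(self, num_lights: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_lights, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # unnormalized normal (nx, ny, nz)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, num_lights) per-pixel intensities -> (batch, 3) unit normals
        return F.normalize(self.mlp(obs), dim=-1)

# Training sketch: observations rendered with measured BRDFs (e.g. the MERL
# database) provide (observation, ground-truth normal) pairs; here dummy
# tensors stand in for such data.
model = PerPixelNormalNet(num_lights=96)
obs = torch.rand(8, 96)                            # dummy batch of per-pixel observations
gt = F.normalize(torch.randn(8, 3), dim=-1)        # dummy ground-truth unit normals
loss = (1.0 - (model(obs) * gt).sum(dim=-1)).mean()  # cosine-similarity loss
loss.backward()                                    # gradients for an optimizer step
```

At test time the same network is applied independently to every pixel of the input image stack, producing a dense normal map.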

Related Material


[pdf]
[bibtex]
@InProceedings{Santo_2017_ICCV,
author = {Santo, Hiroaki and Samejima, Masaki and Sugano, Yusuke and Shi, Boxin and Matsushita, Yasuyuki},
title = {Deep Photometric Stereo Network},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}