PVNN: A Neural Network Library for Photometric Vision

Ye Yu, William A. P. Smith; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 526-535

Abstract

In this paper we show how a differentiable, physics-based renderer suitable for photometric vision tasks can be implemented as layers in a deep neural network. The layers include geometric operations for representation transformations, reflectance evaluations with arbitrary numbers of light sources and statistical bidirectional reflectance distribution function (BRDF) models. We make an implementation of these layers available as a neural network library (pvnn) for Theano. The layers can be incorporated into any neural network architecture, allowing parts of the photometric image formation process to be explicitly modelled in a network that is trained end to end via backpropagation. As an exemplar application, we show how to train a network with encoder-decoder architecture that learns to estimate BRDF parameters from a single image in an unsupervised manner.
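To make the idea concrete, the following is a minimal, hypothetical sketch written with plain Theano ops; it is not the pvnn API, and every name in it (render_lambertian, normals, albedo, light) is an illustrative assumption. It shows the general principle the abstract describes: a physics-based shading step built from differentiable operations can sit inside a network and be trained by backpropagating a self-supervised photometric loss to the physical parameters.

import numpy as np
import theano
import theano.tensor as T

def render_lambertian(normals, albedo, light):
    """Shade per-pixel normals with a single directional light.

    normals : (N, 3) symbolic matrix of unit surface normals
    albedo  : (N,)   symbolic vector of diffuse albedos
    light   : (3,)   symbolic vector (direction scaled by intensity)
    """
    # Clamped dot product n . l gives the diffuse shading term.
    shading = T.maximum(T.dot(normals, light), 0.0)
    return albedo * shading

# Symbolic inputs: predicted scene parameters and the observed image.
normals = T.matrix('normals')
albedo = T.vector('albedo')
light = T.vector('light')
observed = T.vector('observed')

rendered = render_lambertian(normals, albedo, light)

# Self-supervised photometric loss: compare re-rendered pixels to the input.
loss = T.mean((rendered - observed) ** 2)

# Because the renderer is built from differentiable ops, Theano can
# backpropagate the loss to every physical parameter.
grads = T.grad(loss, [normals, albedo, light])

f = theano.function([normals, albedo, light, observed], [loss] + grads)

# Toy usage with random data (4 pixels).
n = np.random.randn(4, 3).astype(theano.config.floatX)
n /= np.linalg.norm(n, axis=1, keepdims=True)
a = np.random.rand(4).astype(theano.config.floatX)
l = np.array([0.0, 0.0, 1.0], dtype=theano.config.floatX)
img = np.random.rand(4).astype(theano.config.floatX)
print(f(n, a, l, img)[0])

In the paper's setting, such a rendering step would be placed after a decoder that predicts the scene parameters, so the photometric loss alone (no ground-truth BRDF labels) drives learning; this sketch only illustrates the gradient flow through the renderer.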

Related Material

@InProceedings{Yu_2017_ICCV,
author = {Yu, Ye and Smith, William A. P.},
title = {PVNN: A Neural Network Library for Photometric Vision},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}