Compensating for Motion during Direct-Global Separation

Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1481-1488

Abstract

Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source, and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time-varying, high-frequency active illumination patterns. We compare our motion-compensated method to alternatives such as single-shot separation and frame interleaving, as well as to ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves, and wax.
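
For context, the separation that the paper makes robust to motion is the standard per-pixel max/min scheme of Nayar et al. for high-frequency illumination. The sketch below is a minimal, illustrative implementation of that baseline, not the authors' code; it assumes the input frames have already been registered to a common reference (registration under time-varying patterns is exactly what the paper contributes) and that a fraction pattern_fraction of projector pixels is lit in every pattern.

import numpy as np

def separate_direct_global(frames, pattern_fraction=0.5):
    # frames: (N, H, W) stack of registered grayscale frames, each captured
    # under a different shift of a high-frequency illumination pattern.
    # pattern_fraction: fraction of projector pixels lit in each pattern
    # (0.5 for a checkerboard); an assumed parameter for this sketch.
    frames = np.asarray(frames, dtype=np.float64)
    l_max = frames.max(axis=0)  # pixel directly lit in at least one frame
    l_min = frames.min(axis=0)  # pixel unlit (receives only global light) in at least one frame

    # Ideal high-frequency model: l_max = direct + pattern_fraction * global
    #                             l_min = pattern_fraction * global
    global_comp = l_min / pattern_fraction
    direct_comp = l_max - l_min
    return direct_comp, global_comp

With a static scene and a stack of, say, 25 frames of a shifted checkerboard, this recovers the two components directly; the paper's motion compensation supplies the registered frame stack that such a separation needs when the scene, projector, or camera moves.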

Related Material

[pdf]
[bibtex]
@InProceedings{Achar_2013_ICCV,
author = {Achar, Supreeth and Nuske, Stephen T. and Narasimhan, Srinivasa G.},
title = {Compensating for Motion during Direct-Global Separation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}