@InProceedings{Rodriguez-Puigvert_2023_ICCV,
    author    = {Rodr{\'\i}guez-Puigvert, Javier and Batlle, V{\'\i}ctor M. and Montiel, J.M.M. and Martinez-Cantin, Ruben and Fua, Pascal and Tard{\'o}s, Juan D. and Civera, Javier},
    title     = {LightDepth: Single-View Depth Self-Supervision from Illumination Decline},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21273-21283}
}
LightDepth: Single-View Depth Self-Supervision from Illumination Decline
Abstract
Single-view depth estimation can be remarkably effective if there is enough ground-truth depth data for supervised training. However, there are scenarios, notably medical ones such as endoscopy, where such data cannot be obtained. In those cases, multi-view self-supervision and synthetic-to-real transfer serve as alternative approaches, albeit with a considerable performance reduction compared to the supervised case.
Instead, we propose a single-view self-supervised method that achieves performance similar to the supervised case. In some medical devices, such as endoscopes, the camera and light source are co-located at a small distance from the target surfaces. We can therefore exploit the fact that, for any given albedo and surface orientation, pixel brightness is inversely proportional to the square of the distance to the surface, providing a strong single-view self-supervisory signal. In our experiments, our self-supervised models deliver accuracies comparable to those of fully supervised ones, while being applicable without ground-truth depth data.
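The inverse-square illumination cue described above can be sketched as a simple photometric self-supervision loss. This is a minimal illustrative sketch, not the paper's implementation: the function names, the unit gain, and the per-pixel albedo and orientation inputs are assumptions for the example.

```python
import numpy as np

def render_brightness(depth, albedo, cos_theta, gain=1.0):
    """Render per-pixel brightness with the light co-located at the camera:
    brightness = gain * albedo * cos_theta / depth**2 (inverse-square decline)."""
    return gain * albedo * cos_theta / depth**2

def photometric_loss(pred_depth, albedo, cos_theta, observed):
    """L1 difference between the image rendered from predicted depth
    and the observed image; gradients of this loss would supervise depth."""
    rendered = render_brightness(pred_depth, albedo, cos_theta)
    return np.abs(rendered - observed).mean()

# Toy check: depth that generated the observation yields zero loss,
# and doubling the distance quarters the brightness.
true_depth = np.array([[1.0, 2.0], [4.0, 0.5]])
albedo = np.full_like(true_depth, 0.8)
cos_theta = np.ones_like(true_depth)
observed = render_brightness(true_depth, albedo, cos_theta)
print(photometric_loss(true_depth, albedo, cos_theta, observed))  # 0.0
```

In practice the albedo and surface orientation are unknown; the appeal of the co-located light setup is that the 1/d² term alone already ties brightness strongly to depth.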
Related Material