[bibtex]
@InProceedings{Logothetis_2025_WACV,
  author    = {Logothetis, Fotios and Budvytis, Ignas and Cipolla, Roberto},
  title     = {NPL-MVPS: Neural Point-Light Multi-View Photometric Stereo},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {2291-2300}
}
NPL-MVPS: Neural Point-Light Multi-View Photometric Stereo
Abstract
In this work we present a novel multi-view photometric stereo (MVPS) method. Like many works in 3D reconstruction, we leverage neural shape representations and learnt renderers. However, our work differs from state-of-the-art multi-view PS methods such as PS-NeRF or SuperNormal in that we explicitly leverage per-pixel intensity renderings rather than relying mainly on estimated normals. We model point-light attenuation and explicitly raytrace cast shadows in order to best approximate the incoming radiance at each surface point. The estimated incoming radiance is used as input to a fully neural material renderer that makes minimal prior assumptions and is jointly optimised with the surface. Estimated normals and segmentation maps are also incorporated in order to maximise surface accuracy. Our method is among the first (along with SuperNormal) to outperform the classical MVPS approach proposed with the DiLiGenT-MV benchmark, achieving an average Chamfer distance of 0.2mm for objects imaged at approximately 1.5m distance with a resolution of roughly 400x400 pixels. Moreover, our method shows high robustness in the sparse MVPS setup (6 views, 6 lights), greatly outperforming the state-of-the-art competitor (0.38mm vs 0.61mm) and illustrating the importance of neural rendering in multi-view photometric stereo.
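To make the incoming-radiance estimate concrete, below is a minimal NumPy sketch of a common point-light model combining inverse-square distance falloff with an angular dissipation term. The function name and the parameters phi (light intensity) and mu (dissipation exponent) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def point_light_attenuation(points, light_pos, light_dir, phi=1.0, mu=0.0):
    """Per-point incoming radiance scale for a near-field point light.

    points:    (N, 3) surface points
    light_pos: (3,)   light position
    light_dir: (3,)   unit principal direction of the light
    phi:       scalar light intensity (illustrative parameter)
    mu:        angular dissipation exponent (0 = isotropic point light)
    """
    to_points = points - light_pos                 # light -> surface vectors
    dist2 = np.sum(to_points ** 2, axis=-1)        # squared distances
    l_hat = to_points / np.sqrt(dist2)[..., None]  # unit light-to-point dirs
    # Angular falloff relative to the light's principal direction,
    # combined with inverse-square distance attenuation.
    angular = np.clip(l_hat @ light_dir, 0.0, 1.0) ** mu
    return phi * angular / dist2
```

In the pipeline described above, such an attenuation term would additionally be multiplied by a raytraced visibility factor to account for cast shadows before being passed to the neural material renderer.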