Learning-Based Spotlight Position Optimization for Non-Line-of-Sight Human Localization and Posture Classification

Sreenithy Chandran, Tatsuya Yatagawa, Hiroyuki Kubo, Suren Jayasuriya; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 4218-4227

Abstract


Non-line-of-sight (NLOS) imaging is the process of estimating information about a scene that is hidden from the camera's direct line of sight. NLOS imaging typically requires time-resolved detectors and a laser source for illumination, which are both expensive and computationally intensive to work with. In this paper, we propose an NLOS-based localization and posture classification technique that works with an off-the-shelf projector and camera. We leverage a message-passing neural network to learn the scene geometry and predict the spotlight position for the projector that maximizes the NLOS signal. The neural network is trained end-to-end: no ground-truth spotlight position is needed during training, and the network parameters are optimized directly to maximize NLOS performance. Unlike prior deep-learning-based NLOS techniques that assume planar relay walls, our system can handle line-of-sight scenes with more arbitrary geometries. Our method demonstrates state-of-the-art performance in object localization and posture classification on both synthetic and real scenes.
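The core idea in the abstract can be illustrated with a minimal sketch: a message-passing network aggregates features over candidate relay-wall patches and scores each one, and the softmax over those scores selects where the projector should spotlight. Everything below (patch features, connectivity, weight shapes, the mean-aggregation update) is an illustrative assumption in NumPy, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing(node_feats, adj, W_msg, W_upd, steps=2):
    """Toy message-passing network: mean-aggregate neighbor messages,
    then update each node's hidden state with a tanh nonlinearity.
    (Hypothetical stand-in for the paper's MPNN over scene geometry.)"""
    h = node_feats
    for _ in range(steps):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid /0
        msgs = (adj @ (h @ W_msg)) / deg                       # mean of neighbor messages
        h = np.tanh(h @ W_upd + msgs)
    return h

def spotlight_probs(h, w_out):
    """Softmax over per-patch scores: a distribution over candidate
    spotlight positions on the relay wall."""
    logits = h @ w_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy relay wall: N candidate patches, each with 6 features
# (e.g. 3D position + surface normal), randomly connected.
N, F = 8, 6
feats = rng.normal(size=(N, F))
adj = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(adj, 0.0)

# Illustrative random weights; in the paper these would be learned
# end-to-end from the downstream localization/classification loss.
W_msg = rng.normal(scale=0.3, size=(F, F))
W_upd = rng.normal(scale=0.3, size=(F, F))
w_out = rng.normal(scale=0.3, size=F)

probs = spotlight_probs(message_passing(feats, adj, W_msg, W_upd), w_out)
best_patch = int(np.argmax(probs))
```

Because the spotlight choice is expressed as a differentiable softmax rather than a hard argmax, a downstream localization or posture loss can backpropagate through it, which is what lets the network be trained without ground-truth spotlight positions.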

Related Material


[bibtex]
@InProceedings{Chandran_2024_WACV,
  author    = {Chandran, Sreenithy and Yatagawa, Tatsuya and Kubo, Hiroyuki and Jayasuriya, Suren},
  title     = {Learning-Based Spotlight Position Optimization for Non-Line-of-Sight Human Localization and Posture Classification},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {4218-4227}
}