Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People

Chayma Zatout, Slimane Larabi, Ilyes Mendili, Soedji Ablam Edoh Barnabe; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


This work is devoted to scene understanding and improving the mobility of visually impaired and blind people. We investigate how to exploit egocentric vision to provide semantic labeling of a scene captured by a head-mounted depth camera. More specifically, we propose a new method for locating the ground plane in a depth image whatever the camera's pose. The remaining planes of the scene are located with the RANSAC method, semantically coded by their attributes, and mapped as cylinders into a generated 3D scene that serves as feedback to users. Experiments are conducted and the obtained results are discussed.
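The pipeline the abstract outlines (depth image to point cloud, iterative RANSAC plane extraction, attribute-based labeling) can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes Open3D, hypothetical camera intrinsics and input file name, and a simple gravity-along-the-camera-y-axis heuristic for flagging horizontal planes, whereas the paper's dedicated method locates the ground regardless of the camera's pose.

import numpy as np
import open3d as o3d

# Hypothetical intrinsics for a 640x480 depth sensor; the abstract does
# not specify the head-mounted camera's parameters.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

depth = o3d.io.read_image("depth.png")  # placeholder file name
pcd = o3d.geometry.PointCloud.create_from_depth_image(depth, intrinsic)

# Peel off dominant planes one by one with RANSAC.
planes, rest = [], pcd
while len(rest.points) > 500 and len(planes) < 6:
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3,
                                        num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)

# Coarse semantic attribute per plane from its normal. Caveat: this
# assumes gravity roughly along the camera y-axis; the paper's ground
# detection instead works whatever the camera's pose.
for (a, b, c, d), patch in planes:
    n = np.array([a, b, c]) / np.linalg.norm([a, b, c])
    kind = "horizontal (ground candidate)" if abs(n[1]) > 0.9 else "vertical/other"
    print(f"{len(patch.points):6d} pts  {a:+.2f}x{b:+.2f}y{c:+.2f}z{d:+.2f}=0  -> {kind}")

Each labeled plane patch could then be summarized by its extent and rendered as a cylinder in the generated 3D feedback scene, as the abstract describes.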

Related Material


[pdf]
[bibtex]
@InProceedings{Zatout_2019_ICCV,
author = {Zatout, Chayma and Larabi, Slimane and Mendili, Ilyes and Ablam Edoh Barnabe, Soedji},
title = {Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}