Depth and Motion Cues With Phosphene Patterns for Prosthetic Vision

Alejandro Perez-Yus, Jesus Bermudez-Cameo, Gonzalo Lopez-Nicolas, Jose J. Guerrero; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1516-1525

Abstract

Recent research demonstrates that visual prostheses can provide visual perception to people with certain types of blindness. In visual prostheses, image information from the scene is transformed into a phosphene pattern to be sent to the implant. This is a complex problem whose main challenge is the very limited spatial and intensity resolution. Moreover, depth perception, which is relevant for agile navigation, is lost, and encoding semantic information into phosphene patterns remains an open problem. In this work, we consider the framework of perception for navigation, where aspects such as obstacle avoidance are critical. We propose using a head-mounted RGB-D camera to detect free space, obstacles, and the scene direction in front of the user. The main contribution is a new approach to representing depth information and providing motion cues through particular phosphene patterns. The effectiveness of this approach is tested in simulation with real data from indoor environments.
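The abstract describes mapping depth information from an RGB-D camera onto a phosphene pattern of very limited spatial and intensity resolution. A minimal sketch of that idea (not the authors' actual method) is to pool a depth image into a coarse grid and quantize each cell to a few brightness levels, with nearer surfaces rendered brighter; the function name, grid size, and level count below are illustrative assumptions.

```python
import numpy as np

def depth_to_phosphenes(depth, grid=(32, 32), levels=8):
    """Illustrative sketch: pool a depth map into a low-resolution
    phosphene grid with a few intensity levels, brighter = closer.

    This is a hypothetical simplification, not the paper's algorithm.
    """
    h, w = depth.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = depth[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            valid = cell[cell > 0]  # ignore missing depth readings
            if valid.size == 0:
                continue
            # invert depth so nearby obstacles appear as bright phosphenes
            out[i, j] = 1.0 / np.median(valid)
    if out.max() > 0:
        out /= out.max()
    # quantize to the implant's few discriminable intensity levels
    return np.round(out * (levels - 1)) / (levels - 1)
```

A real simulation would additionally render each grid cell as a Gaussian spot and handle sensor noise, but the coarse pooling and quantization above capture the resolution constraint the abstract highlights.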

Related Material

[bibtex]
@InProceedings{Perez-Yus_2017_ICCV,
author = {Perez-Yus, Alejandro and Bermudez-Cameo, Jesus and Lopez-Nicolas, Gonzalo and Guerrero, Jose J.},
title = {Depth and Motion Cues With Phosphene Patterns for Prosthetic Vision},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}