ASSIST: Personalized indoor navigation via multimodal sensors and high-level semantic information

Vishnu Nair, Manjekar Budhai, Greg Olmschenk, William H. Seiple, Zhigang Zhu; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Blind & visually impaired (BVI) individuals and those with Autism Spectrum Disorder (ASD) each face unique challenges in navigating unfamiliar indoor environments. In this paper, we propose an indoor positioning and navigation system that guides a user from point A to point B indoors with high accuracy while augmenting their situational awareness. This system has three major components: location recognition (a hybrid indoor localization app that uses Bluetooth Low Energy beacons and Google Tango to provide high accuracy), object recognition (a body-mounted camera to provide the user with momentary situational awareness of objects and people), and semantic recognition (map-based annotations to alert the user of static environmental characteristics). This system also features personalized interfaces built upon the unique experiences that both BVI and ASD individuals have in indoor wayfinding, and it tailors its multimodal feedback to their needs. Here, the technical approach and implementation of this system are discussed, and the results of human subject tests with both BVI and ASD individuals are presented. In addition, we discuss and show the system's user-centric interface and present points for future work and expansion.
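
The abstract describes a hybrid localization scheme in which BLE beacons provide coarse absolute position fixes and Google Tango provides fine-grained relative motion. The sketch below (in Python) illustrates one plausible way such a fusion could work: odometry deltas advance the estimate, and close-range beacon sightings re-anchor it to cancel drift. This is not the authors' implementation; all names (Beacon, HybridLocalizer, the RSSI threshold, the blending weights) are illustrative assumptions.

# Hypothetical sketch of hybrid BLE-beacon + visual-odometry localization.
# Beacons give coarse absolute anchors; Tango-style odometry gives relative
# motion; strong beacon sightings re-anchor the drifting estimate.

from dataclasses import dataclass
from typing import Dict, Tuple

Vec2 = Tuple[float, float]


@dataclass
class Beacon:
    """A BLE beacon with a known position on the annotated floor map."""
    beacon_id: str
    position: Vec2


class HybridLocalizer:
    """Fuses coarse beacon fixes with relative visual-odometry deltas."""

    def __init__(self, beacons: Dict[str, Beacon]):
        self.beacons = beacons
        self.estimate: Vec2 = (0.0, 0.0)   # current position in the map frame
        self.anchored = False              # True after the first beacon fix

    def on_odometry_delta(self, dx: float, dy: float) -> Vec2:
        """Apply a relative motion update (e.g., from a Tango pose delta)."""
        x, y = self.estimate
        self.estimate = (x + dx, y + dy)
        return self.estimate

    def on_beacon_sighting(self, beacon_id: str, rssi_dbm: float,
                           strong_rssi_threshold: float = -60.0) -> Vec2:
        """Re-anchor the estimate when a beacon is heard at close range.

        A strong RSSI implies the user is near the beacon, so the estimate
        is pulled toward the beacon's surveyed map position to cancel
        accumulated odometry drift.
        """
        beacon = self.beacons.get(beacon_id)
        if beacon is None:
            return self.estimate
        if rssi_dbm >= strong_rssi_threshold or not self.anchored:
            bx, by = beacon.position
            if self.anchored:
                # Blend instead of snapping to avoid jumpy position output.
                x, y = self.estimate
                self.estimate = (0.5 * x + 0.5 * bx, 0.5 * y + 0.5 * by)
            else:
                self.estimate = (bx, by)
                self.anchored = True
        return self.estimate


if __name__ == "__main__":
    beacons = {"b1": Beacon("b1", (0.0, 0.0)), "b2": Beacon("b2", (10.0, 0.0))}
    loc = HybridLocalizer(beacons)
    loc.on_beacon_sighting("b1", rssi_dbm=-45.0)         # initial coarse fix
    loc.on_odometry_delta(9.7, 0.2)                      # walk ~10 m (with drift)
    print(loc.on_beacon_sighting("b2", rssi_dbm=-50.0))  # drift corrected near b2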

Related Material


[pdf]
[bibtex]
@InProceedings{Nair_2018_ECCV_Workshops,
author = {Nair, Vishnu and Budhai, Manjekar and Olmschenk, Greg and Seiple, William H. and Zhu, Zhigang},
title = {ASSIST: Personalized indoor navigation via multimodal sensors and high-level semantic information},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}