Video Demo: An Egocentric Vision Based Assistive Co-robot

Jingzhe Zhang, Lishuo Zhuang, Yang Wang, Yameng Zhou, Yan Meng, Gang Hua; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2013, pp. 48-49


We present a video demo of the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they provide the visual input used to request that the robot find a certain object in the environment. Second, the motion patterns computed from the egocentric video, associated with a specific set of head movements, are exploited to guide the robot toward the object. This is especially helpful for quadriplegic individuals who lack the hand functionality needed for other control modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task within a prespecified time window, it actively solicits user guidance. The user can then use the egocentric vision based gesture interface to orient the robot toward the direction of the object, after which the robot automatically navigates toward the object until it is found. Our experiments validated the efficacy of this closed-loop design that engages the human in the loop.
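The interaction described above (autonomous search within a time budget, falling back to gesture-guided reorientation, then resuming autonomous navigation) can be summarized as a small state machine. The sketch below is purely illustrative and assumes hypothetical callbacks `find_object` and `get_head_gesture`; it is not the authors' implementation.

```python
import enum


class State(enum.Enum):
    """States of the hypothetical closed-loop co-robot controller."""
    SEARCHING = "searching"      # robot searches autonomously
    SOLICITING = "soliciting"    # robot asks the user for a head gesture
    NAVIGATING = "navigating"    # robot drives toward the user-given heading
    DONE = "done"


def control_loop(find_object, get_head_gesture, max_search_steps=5):
    """Simulate the closed-loop protocol from the abstract.

    find_object: callable returning True once the target is detected
                 (stands in for the robot's visual search).
    get_head_gesture: callable returning a heading such as "left" or
                 "right", derived from egocentric head-motion patterns.
    max_search_steps: search budget, a stand-in for the prespecified
                 time window mentioned in the abstract.
    Returns the sequence of states visited, for inspection.
    """
    state = State.SEARCHING
    trace = [state]
    steps = 0
    heading = None
    while state is not State.DONE:
        if state is State.SEARCHING:
            if find_object():
                state = State.DONE
            else:
                steps += 1
                if steps >= max_search_steps:
                    # Time window exhausted: solicit user control.
                    state = State.SOLICITING
        elif state is State.SOLICITING:
            heading = get_head_gesture()  # e.g. "left" from head movement
            state = State.NAVIGATING
        elif state is State.NAVIGATING:
            # Navigate toward `heading` until the object is found
            # (collapsed to one step in this simulation).
            state = State.DONE
        trace.append(state)
    return trace
```

For example, with a `find_object` that never succeeds on its own, the trace passes through the soliciting and navigating states before terminating, mirroring the human-in-the-loop fallback.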

Related Material

@InProceedings{Zhang_2013_CVPR_Workshops,
  author    = {Zhang, Jingzhe and Zhuang, Lishuo and Wang, Yang and Zhou, Yameng and Meng, Yan and Hua, Gang},
  title     = {Video Demo: An Egocentric Vision Based Assistive Co-robot},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2013},
  pages     = {48-49}
}