Learning Biomimetic Perception for Human Sensorimotor Control

Masaki Nakada, Honglin Chen, Demetri Terzopoulos; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1917-1922

Abstract


We introduce a biomimetic simulation framework for human perception and sensorimotor control. Our framework features a biomechanically simulated musculoskeletal human model actuated by numerous skeletal muscles, with two human-like eyes whose retinas contain spatially nonuniform distributions of photoreceptors. Its prototype sensorimotor system comprises a set of 20 automatically-trained deep neural networks (DNNs), half of which constitute the neuromuscular motor control subsystem, while the other half are devoted to the visual perception subsystem. Directly from the photoreceptor responses, 2 perception DNNs control eye and head movements, while 8 DNNs extract the perceptual information needed to control the arms and legs. Thus, driven exclusively by its egocentric, active visual perception, our virtual human is capable of learning efficient, online visuomotor control of its eyes, head, and four limbs to perform a nontrivial task involving the foveation and visual pursuit of a moving target object coupled with visually-guided reaching actions to intercept the incoming target.
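The "spatially nonuniform distributions of photoreceptors" mentioned above can be illustrated with a minimal sketch. Assuming a foveated, log-polar-style layout (an assumption for illustration; the paper's exact placement model, photoreceptor count, and parameters are not given in this abstract), the sketch below samples photoreceptor positions on log-spaced concentric rings so that density is highest at the fovea and falls off toward the periphery:

```python
import math
import random

def foveated_photoreceptor_positions(n_rings=40, n_per_ring=90,
                                     r_min=0.005, r_max=1.0,
                                     jitter=0.1, seed=0):
    """Sample a spatially nonuniform (foveated) photoreceptor layout.

    Photoreceptors are placed on log-spaced concentric rings, so density
    is highest near the center (fovea) and decreases toward the periphery,
    loosely mimicking a biological retina. All parameters are illustrative,
    not taken from the paper.
    """
    rng = random.Random(seed)
    positions = []
    for i in range(n_rings):
        # Log-spaced radius: rings crowd together near the center.
        r = r_min * (r_max / r_min) ** (i / (n_rings - 1))
        for j in range(n_per_ring):
            theta = 2 * math.pi * j / n_per_ring
            # Small radial jitter breaks up the regular lattice.
            rj = r * (1 + jitter * (rng.random() - 0.5))
            positions.append((rj * math.cos(theta), rj * math.sin(theta)))
    return positions

pts = foveated_photoreceptor_positions()
print(len(pts))  # 3600 sample positions
```

Because the ring radii grow geometrically, the majority of the sampled photoreceptors fall inside a small central (foveal) region, which is the property the retinal model exploits for efficient foveation and pursuit.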

Related Material


[bibtex]
@InProceedings{Nakada_2018_CVPR_Workshops,
author = {Nakada, Masaki and Chen, Honglin and Terzopoulos, Demetri},
title = {Learning Biomimetic Perception for Human Sensorimotor Control},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}