Semantic Pose Using Deep Networks Trained on Synthetic RGB-D

Jeremie Papon, Markus Schoeler; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 774-782


In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel to training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model is able to successfully annotate even highly challenging real scenes. Importantly, our trained network is able to understand noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, our network is only moderately deep and computes class, pose, and position in tandem, so its overall run-time is significantly faster than that of existing methods.
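The multi-output idea described above — one shared network whose forward pass yields class, pose, and location at the same time — can be illustrated with a minimal sketch. This is not the authors' architecture (the paper's trunk is convolutional and its layer sizes differ); it is a toy fully-connected stand-in, with illustrative head dimensions (10 classes, a 4-vector pose, a 3-D location), showing how three heads branch from shared features so all outputs come from a single pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared feature extractor (stand-in for the convolutional trunk).
W_trunk = rng.standard_normal((64, 32)) * 0.1

# Three heads computed from the same shared features, so one forward
# pass produces all outputs in tandem. Dimensions are illustrative.
W_class = rng.standard_normal((32, 10)) * 0.1   # hypothetical: 10 furniture classes
W_pose  = rng.standard_normal((32, 4))  * 0.1   # hypothetical: pose as a 4-vector
W_loc   = rng.standard_normal((32, 3))  * 0.1   # hypothetical: 3-D object location

def forward(x):
    h = relu(x @ W_trunk)          # shared features
    return {
        "class": softmax(h @ W_class),   # class probabilities
        "pose": h @ W_pose,              # pose regression
        "location": h @ W_loc,           # location regression
    }

x = rng.standard_normal((2, 64))   # a batch of 2 input feature vectors
out = forward(x)
print({k: v.shape for k, v in out.items()})
```

Because the heads share the trunk, the per-image cost is essentially one forward pass regardless of how many quantities are predicted, which is the source of the run-time advantage the abstract claims over pipelines that estimate class, pose, and position in separate stages.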

Related Material

@InProceedings{Papon_2015_ICCV,
  author    = {Papon, Jeremie and Schoeler, Markus},
  title     = {Semantic Pose Using Deep Networks Trained on Synthetic RGB-D},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {December},
  year      = {2015}
}