Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation

Ryan Szeto, Jason J. Corso; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1595-1604

Abstract


We motivate and address a human-in-the-loop variant of the monocular viewpoint estimation task in which the location and class of one semantic object keypoint are available at test time. To leverage this keypoint, we devise a convolutional neural network called Click-Here CNN (CH-CNN) that integrates the keypoint information with activations from the layers that process the image: the keypoint is transformed into a 2D map used to weight features from certain parts of the image more heavily. The weighted sum of these spatial features is then combined with global image features to provide relevant information to the prediction layers. To train our network, we collect a novel dataset of 3D keypoint annotations on thousands of CAD models and synthetically render millions of images with 2D keypoint information. On test instances from PASCAL 3D+, our model achieves a mean class accuracy of 90.7%, whereas the state-of-the-art baseline obtains only 85.7%, justifying our argument for human-in-the-loop inference.
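
The keypoint-weighting idea described in the abstract can be illustrated concretely. The following PyTorch sketch is not the authors' implementation: the 14x14 feature grid, 512 channels, Gaussian rendering of the click, 34 keypoint classes, and fusion-layer sizes are illustrative assumptions chosen to make the example self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F

def keypoint_map(u, v, size=14, sigma=1.5):
    # Render a (size x size) Gaussian bump centered on the clicked keypoint (u, v),
    # expressed in feature-grid coordinates. (The paper's exact map construction may differ.)
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    return torch.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))

class KeypointGuidedHead(nn.Module):
    # Weights spatial backbone features by a keypoint-derived attention map, then
    # fuses the attended vector with globally pooled image features for prediction.
    def __init__(self, feat_channels=512, num_kp_classes=34, num_bins=360, grid=14):
        super().__init__()
        self.grid = grid
        # Attention over the feature grid, conditioned on the click map and keypoint class.
        self.attn = nn.Linear(grid * grid + num_kp_classes, grid * grid)
        self.fuse = nn.Linear(feat_channels * 2, 1024)
        self.azimuth = nn.Linear(1024, num_bins)  # discretized azimuth bins

    def forward(self, feats, kp_map, kp_class):
        # feats: (B, C, H, W) conv features; kp_map: (B, H, W); kp_class: (B, K) one-hot.
        B, C, H, W = feats.shape
        attn_in = torch.cat([kp_map.flatten(1), kp_class], dim=1)
        weights = torch.softmax(self.attn(attn_in), dim=1).view(B, 1, H, W)
        attended = (feats * weights).sum(dim=(2, 3))              # keypoint-weighted sum, (B, C)
        global_feat = F.adaptive_avg_pool2d(feats, 1).flatten(1)  # global image features, (B, C)
        fused = F.relu(self.fuse(torch.cat([attended, global_feat], dim=1)))
        return self.azimuth(fused)

Given backbone features feats of shape (B, 512, 14, 14), a call such as KeypointGuidedHead()(feats, keypoint_map(5.0, 9.0).expand(B, 14, 14), kp_class) yields azimuth logits; the full viewpoint task also involves elevation and in-plane rotation, which analogous output layers would predict.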

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Szeto_2017_ICCV,
author = {Szeto, Ryan and Corso, Jason J.},
title = {Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {1595-1604}
}