Semantically-Enriched 3D Models for Common-Sense Knowledge

Manolis Savva, Angel X. Chang, Pat Hanrahan; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 24-31

Abstract


We identify a set of physical properties and connect them to 3D models, creating a richly annotated 3D model dataset with data on physical sizes, static support, attachment surfaces, material compositions, and weights. To collect these physical property priors, we leverage observations of 3D models within 3D scenes and information from images and text. By augmenting 3D models with these properties, we create a semantically rich, multi-layered dataset of common indoor objects. We demonstrate the usefulness of these annotations for improving 3D scene synthesis systems, enabling faceted semantic queries into 3D model datasets, and reasoning about how people can manipulate objects, using weight and static friction estimates.
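The manipulability reasoning mentioned above can be illustrated with the standard Coulomb friction model: the minimum horizontal force needed to start sliding an object on a level floor is F = μ_s · m · g. The sketch below is not the paper's implementation; the `max_force_n` threshold for a human push is an assumed placeholder.

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_push_force(mass_kg: float, mu_static: float) -> float:
    """Minimum horizontal force (N) to overcome static friction
    for an object resting on a level surface: F = mu_s * m * g."""
    return mu_static * mass_kg * G

def is_pushable(mass_kg: float, mu_static: float,
                max_force_n: float = 250.0) -> bool:
    """Crude manipulability check: can a person push this object?
    max_force_n is an assumed rough capacity for an adult push."""
    return min_push_force(mass_kg, mu_static) <= max_force_n
```

For example, a 40 kg dresser on a low-friction floor (μ_s ≈ 0.4) requires about 157 N to start moving, while a 100 kg appliance on a high-friction surface (μ_s ≈ 0.6) exceeds the assumed push capacity.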

Related Material


[bibtex]
@InProceedings{Savva_2015_CVPR_Workshops,
author = {Savva, Manolis and Chang, Angel X. and Hanrahan, Pat},
title = {Semantically-Enriched 3D Models for Common-Sense Knowledge},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2015},
pages = {24-31}
}