SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception

Yue Meng, Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Tara Javidi, Gaurav Bansal, Dinesh Bharadia; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9810-9820


Unsupervised learning for geometric perception (depth, optical flow, etc.) is of great interest to autonomous systems. Recent works on unsupervised learning have made considerable progress on perceiving geometry; however, they usually ignore the coherence of objects and perform poorly in dark and noisy environments. In contrast, supervised learning algorithms, which are robust, require large labeled geometric datasets. This paper introduces SIGNet, a novel framework that provides robust geometry perception without requiring geometrically informative labels. Specifically, SIGNet integrates semantic information to make depth and flow predictions consistent with objects and robust to low-lighting conditions. SIGNet is shown to improve upon the state of the art in unsupervised learning for depth prediction by 30% (in squared relative error). In particular, SIGNet improves dynamic-object class performance by 39% in depth prediction and 29% in flow prediction. Our code will be made available at

@InProceedings{Meng_2019_CVPR,
  author    = {Meng, Yue and Lu, Yongxi and Raj, Aman and Sunarjo, Samuel and Guo, Rui and Javidi, Tara and Bansal, Gaurav and Bharadia, Dinesh},
  title     = {SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}