Instance Localization for Self-Supervised Detection Pretraining
Abstract
Prior research on self-supervised learning has led to considerable progress on image classification, but often with degraded transfer performance on object detection. The objective of this paper is to advance self-supervised pretrained models specifically for object detection. Based on the inherent difference between classification and detection, we propose a new self-supervised pretext task, called instance localization. Image instances are pasted at various locations and scales onto background images. The pretext task is to predict the instance category given the composited images as well as the foreground bounding boxes. We show that integrating bounding boxes into pretraining promotes better alignment between convolutional features and region boxes. In addition, we propose an augmentation method on the bounding boxes to further enhance this feature alignment. As a result, our model becomes weaker at ImageNet semantic classification but stronger at image patch localization, yielding an overall stronger pretrained model for object detection. Experimental results demonstrate that our approach achieves state-of-the-art transfer learning results for object detection on PASCAL VOC and MS COCO.
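To make the composition step described above concrete, the following is a minimal Python sketch (not the authors' code) of pasting a foreground instance onto a background image at a random location and scale and recording the resulting bounding box; the function name, the scale range, and the use of PIL are illustrative assumptions.

    # Sketch of the instance-localization composition step, assuming PIL images.
    # Names and parameters are hypothetical, not taken from the paper's code.
    import random
    from PIL import Image

    def composite_instance(foreground: Image.Image,
                           background: Image.Image,
                           scale_range=(0.1, 0.5)):
        """Paste `foreground` onto `background` at a random scale and position.

        Returns the composited image and the foreground box (x1, y1, x2, y2).
        """
        bg = background.copy()
        bw, bh = bg.size

        # Sample a random scale relative to the background's shorter side.
        scale = random.uniform(*scale_range)
        short_side = min(bw, bh)
        fw, fh = foreground.size
        ratio = scale * short_side / max(fw, fh)
        new_w, new_h = max(1, int(fw * ratio)), max(1, int(fh * ratio))
        fg = foreground.resize((new_w, new_h), Image.BILINEAR)

        # Sample a top-left corner so the pasted instance stays inside the image.
        x1 = random.randint(0, bw - new_w)
        y1 = random.randint(0, bh - new_h)
        bg.paste(fg, (x1, y1))

        # The pretext task predicts the instance category from the composited
        # image together with this foreground bounding box.
        return bg, (x1, y1, x1 + new_w, y1 + new_h)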
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Yang_2021_CVPR,
    author    = {Yang, Ceyuan and Wu, Zhirong and Zhou, Bolei and Lin, Stephen},
    title     = {Instance Localization for Self-Supervised Detection Pretraining},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {3987-3996}
}