Factors in Finetuning Deep Model for Object Detection With Long-Tail Distribution

Wanli Ouyang, Xiaogang Wang, Cong Zhang, Xiaokang Yang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 864-873

Abstract
Finetuning from a pretrained deep model yields state-of-the-art performance for many vision tasks. This paper investigates the factors that influence finetuning performance for object detection. Sample numbers across object classes follow a long-tailed distribution. Our analysis and empirical results show that classes with more samples have a greater influence on feature learning, and that making the sample number more uniform across classes is beneficial. Generic object detection can be viewed as multiple equally important tasks, where the detection of each class is one task. These classes/tasks have their own individuality in discriminative visual appearance representation. To account for this individuality, we cluster objects into visually similar class groups and learn deep representations for these groups separately. We further propose a hierarchical feature learning scheme, in which knowledge from a group with a large number of classes is transferred to learn features for its sub-groups. Finetuned from the GoogLeNet model, our approach achieves a 4.7% absolute mAP improvement on the ImageNet object detection dataset without adding much computational cost at the testing stage.
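The grouping of classes by visual similarity mentioned above can be illustrated with a minimal sketch: given a mean feature vector per class (e.g., averaged deep features of that class's samples), greedily merge the two closest groups until the desired number of groups remains. This is plain agglomerative clustering with centroid linkage, assumed here for illustration; the paper's exact clustering procedure and feature choice may differ.

```python
import math

def cluster_classes(class_means, num_groups):
    """Greedy agglomerative grouping of per-class mean feature vectors
    (centroid linkage). A simplified illustration of clustering classes
    into visually similar groups, not the paper's exact procedure."""
    # Start with each class in its own group.
    groups = [[i] for i in range(len(class_means))]
    centroids = [list(map(float, m)) for m in class_means]

    def centroid(indices):
        # Mean of the member classes' feature vectors, dimension by dimension.
        dim = len(class_means[0])
        return [sum(class_means[i][d] for i in indices) / len(indices)
                for d in range(dim)]

    while len(groups) > num_groups:
        # Find the closest pair of group centroids.
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                d = math.dist(centroids[a], centroids[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # Merge group b into group a and recompute the centroid.
        groups[a] = groups[a] + groups[b]
        centroids[a] = centroid(groups[a])
        del groups[b], centroids[b]
    return groups
```

For example, with toy 2-D "features" `[[0, 0], [0.1, 0], [5, 5], [5.1, 5]]` and `num_groups=2`, the nearby classes are grouped together, after which a separate detector representation could be learned per group.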

Related Material
[bibtex]
@InProceedings{Ouyang_2016_CVPR,
author = {Ouyang, Wanli and Wang, Xiaogang and Zhang, Cong and Yang, Xiaokang},
title = {Factors in Finetuning Deep Model for Object Detection With Long-Tail Distribution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016},
pages = {864-873}
}