Towards Efficient Instance Segmentation with Hierarchical Distillation

Ziwei Deng, Quan Kong, Tomokazu Murakami; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019


Recently, instance segmentation models have achieved promising accuracy on public benchmarks. However, these models are too heavy for real-world applications due to their low inference speed. In this paper, we propose a faster instance segmentation model built on a teacher-student learning framework that transfers the knowledge of a well-trained teacher model to a lightweight student model. Going beyond the conventional knowledge distillation used in classification or semantic segmentation networks, which are single-task networks, we investigate a hierarchical distillation (H-Dis) framework for distilling structural information in multi-task-learning-based instance segmentation. H-Dis consists of two distillation schemes: representation distillation, which distills pair-wise quantized feature maps shared by the multiple heads, and semantic distillation, which distills each head's information at the instance level. In particular, we present channel-wise distillation for the segmentation head to achieve instance-level mask knowledge transfer. To evaluate our approach, we carry out experiments with different distillation settings on the Pascal VOC and Cityscapes datasets. The experiments show that our approach effectively accelerates instance segmentation models with a smaller accuracy drop under limited computing resources.
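The channel-wise distillation idea for the segmentation head can be illustrated with a minimal sketch. The abstract does not specify the exact loss, so the formulation below is an assumption: each channel of the teacher's and student's mask feature maps is softmax-normalized over its spatial locations, and the student minimizes the KL divergence to the teacher's per-channel distribution. Function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def channelwise_distill_loss(t_feat, s_feat, tau=1.0):
    """Hypothetical channel-wise distillation loss.

    t_feat, s_feat: (C, H, W) teacher/student feature maps for one instance.
    Each channel is treated as a distribution over spatial positions
    (softmax with temperature tau); the loss is the mean per-channel
    KL divergence KL(teacher || student).
    """
    C = t_feat.shape[0]
    t = t_feat.reshape(C, -1) / tau
    s = s_feat.reshape(C, -1) / tau

    # Teacher: softmax over spatial locations per channel.
    t_p = np.exp(t - t.max(axis=1, keepdims=True))
    t_p /= t_p.sum(axis=1, keepdims=True)

    # Student: numerically stable log-softmax.
    s_log = s - s.max(axis=1, keepdims=True)
    s_log = s_log - np.log(np.exp(s_log).sum(axis=1, keepdims=True))

    # KL(teacher || student), averaged over channels.
    kl = (t_p * (np.log(t_p + 1e-12) - s_log)).sum(axis=1)
    return float(kl.mean())

# Usage: loss is zero when student matches teacher, positive otherwise.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8, 8))
student = rng.normal(size=(4, 8, 8))
print(channelwise_distill_loss(teacher, teacher))  # ~0.0
print(channelwise_distill_loss(teacher, student))  # > 0
```

Normalizing each channel over its spatial extent makes the loss emphasize *where* a mask channel activates rather than its absolute magnitude, which is one plausible way to transfer instance-level mask knowledge from teacher to student.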

Related Material

@InProceedings{Deng_2019_ICCV_Workshops,
    author    = {Deng, Ziwei and Kong, Quan and Murakami, Tomokazu},
    title     = {Towards Efficient Instance Segmentation with Hierarchical Distillation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {Oct},
    year      = {2019}
}