@InProceedings{Yoon_2022_ACCV,
  author    = {Yoon, Donggeun and Park, Jinsun and Cho, Donghyeon},
  title     = {Lightweight Alpha Matting Network Using Distillation-Based Channel Pruning},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {1368-1384}
}
Lightweight Alpha Matting Network Using Distillation-Based Channel Pruning
Abstract
Alpha matting has recently received considerable attention because of its usefulness in mobile applications such as selfie photography. Accordingly, there is a demand for lightweight alpha matting models that fit the limited computational resources of commercial portable devices. To this end, we propose a distillation-based channel pruning method for alpha matting networks. In the pruning step, we remove channels of a student network that have little impact on mimicking the knowledge of a teacher network. The pruned lightweight student network is then trained with the same distillation loss. The lightweight alpha matting model produced by the proposed method outperforms existing lightweight methods. To demonstrate the superiority of our algorithm, we provide extensive quantitative and qualitative experiments with in-depth analyses. Furthermore, we show the versatility of the proposed distillation-based channel pruning method by applying it to semantic segmentation.
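The pruning step described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: all names (`channel_saliency`, `prune_channels`, the linear-layer stand-in for a convolution) are hypothetical, and the saliency criterion shown here, which scores each student channel by how much zeroing it increases a distillation (teacher-mimicking) loss, is only one simple instance of the distillation-based criterion the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def distill_loss(student_feat, teacher_feat):
    """L2 distance between student and teacher feature maps (a common distillation loss)."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def channel_saliency(x, W, teacher_feat):
    """Increase in distillation loss when each output channel of W is zeroed out.

    A channel whose removal barely changes the loss contributes little to
    mimicking the teacher and is a candidate for pruning.
    """
    base = distill_loss(x @ W, teacher_feat)
    scores = []
    for c in range(W.shape[1]):
        W_zero = W.copy()
        W_zero[:, c] = 0.0  # simulate pruning channel c
        scores.append(distill_loss(x @ W_zero, teacher_feat) - base)
    return np.array(scores)

def prune_channels(W, scores, keep_ratio=0.5):
    """Keep the channels whose removal hurts teacher mimicking the most."""
    k = max(1, int(round(keep_ratio * W.shape[1])))
    keep = np.sort(np.argsort(scores)[::-1][:k])  # highest-impact channels survive
    return W[:, keep], keep

# Toy example: a linear "layer" with 8 inputs and 6 student channels,
# pruned to 3 channels against synthetic teacher features.
x = rng.normal(size=(32, 8))
W = rng.normal(size=(8, 6))
teacher_feat = x @ rng.normal(size=(8, 6))

scores = channel_saliency(x, W, teacher_feat)
W_pruned, kept = prune_channels(W, scores, keep_ratio=0.5)
print(W_pruned.shape)  # (8, 3)
```

After pruning, the abstract's second step would correspond to retraining the smaller network (here, `W_pruned`) with the same `distill_loss` against the teacher.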