SDNet: An Extremely Efficient Portrait Matting Model via Self-Distillation

Ziwen Li, Bo Xu, Jiake Xie, Yong Tang, Cheng Lu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 5625-5634

Abstract

Most existing portrait matting models either require expensive auxiliary inputs or decompose the task into sub-tasks that are usually resource-hungry. These challenges limit their application on low-power computing devices. In addition, mobile networks tend to be weaker than cumbersome ones at mining feature representations. In this paper, we propose an extremely efficient portrait matting model trained via self-distillation (SDNet), which aims to provide accurate and effective portrait matting with limited computing resources. Our SDNet contains only 2M parameters, 2.2% of the parameters of MGM and 1.5% of those of MatteFormer. We introduce a self-distillation training pipeline that improves our lightweight baseline model without adding parameters, modifying the network, or relying on over-parameterized teacher models that require extensive pretraining. Extensive experiments demonstrate the effectiveness of our self-distillation method and the lightweight SDNet network. Our SDNet outperforms state-of-the-art (SOTA) lightweight approaches on both synthetic and real-world images.
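For readers unfamiliar with the idea, below is a minimal, hypothetical PyTorch sketch of a self-distillation training step: the model's own final prediction supervises intermediate predictions decoded from shallower layers, so no extra parameters or pretrained teacher are needed. The model interface (a network returning a final alpha matte plus intermediate mattes), the L1 losses, and the weighting lam are illustrative assumptions, not the paper's exact pipeline.

    import torch
    import torch.nn.functional as F

    def self_distillation_step(model, image, gt_alpha, optimizer, lam=0.5):
        """One hypothetical training step: ground-truth supervision on the
        final alpha matte, plus self-distillation losses that make shallower
        heads mimic the detached final prediction (no external teacher)."""
        optimizer.zero_grad()

        # Assumed interface: the network returns its final alpha matte and a
        # list of intermediate mattes decoded from shallower features.
        final_alpha, intermediate_alphas = model(image)

        # Ground-truth supervision on the final prediction.
        loss = F.l1_loss(final_alpha, gt_alpha)

        # Self-distillation: detach the final prediction so it acts as a
        # fixed target for the shallower heads within the same network.
        for inter_alpha in intermediate_alphas:
            loss = loss + lam * F.l1_loss(inter_alpha, final_alpha.detach())

        loss.backward()
        optimizer.step()
        return loss.item()

At inference time only the final head would be used, so the sketch adds no parameters or compute to deployment, which matches the paper's stated goal of improving a lightweight baseline without network modification.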

Related Material

[pdf]
[bibtex]
@InProceedings{Li_2024_WACV,
    author    = {Li, Ziwen and Xu, Bo and Xie, Jiake and Tang, Yong and Lu, Cheng},
    title     = {SDNet: An Extremely Efficient Portrait Matting Model via Self-Distillation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {5625-5634}
}