Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation

Qiqi Hou, Feng Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4130-4139

Abstract


Natural image matting is an important problem in computer vision and graphics. It is an ill-posed problem when only an input image is available without any external information. While recent deep learning approaches have shown promising results, they estimate only the alpha matte. This paper presents a context-aware natural image matting method for simultaneous foreground and alpha matte estimation. Our method employs two encoder networks to extract essential information for matting. In particular, we use a matting encoder to learn local features and a context encoder to obtain more global context information. We concatenate the outputs from these two encoders and feed them into decoder networks to simultaneously estimate the foreground and alpha matte. To train the whole network, we employ both the standard Laplacian loss and a feature loss: the former helps to achieve high numerical performance while the latter leads to more perceptually plausible results. We also report several data augmentation strategies that greatly improve the network's generalization performance. Our qualitative and quantitative experiments show that our method enables high-quality matting from a single natural image.
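
The abstract describes a two-encoder / two-decoder layout: a matting encoder for local features, a context encoder for global context, feature concatenation, and separate decoders for the alpha matte and the foreground. The PyTorch sketch below illustrates that data flow only; every layer size, stride, and module name here is an assumption made for illustration, not the authors' released implementation.

# Minimal sketch of the two-encoder, two-decoder matting layout described in
# the abstract. All hyperparameters (channel counts, strides, backbone depth)
# are placeholder assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=1):
    # 3x3 convolution followed by ReLU; used as a stand-in for a real backbone stage.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )

class ContextAwareMatting(nn.Module):
    def __init__(self):
        super().__init__()
        # Matting encoder: shallow, keeps finer spatial detail (local features).
        self.matting_encoder = nn.Sequential(
            conv_block(4, 32), conv_block(32, 64, stride=2), conv_block(64, 64)
        )
        # Context encoder: downsamples further to capture global context.
        self.context_encoder = nn.Sequential(
            conv_block(4, 32, stride=2), conv_block(32, 64, stride=2),
            conv_block(64, 64, stride=2),
        )
        # Two decoders consume the fused features: one predicts the alpha matte,
        # the other the foreground colors.
        self.alpha_decoder = nn.Sequential(
            conv_block(128, 64), nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid()
        )
        self.fg_decoder = nn.Sequential(
            conv_block(128, 64), nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid()
        )

    def forward(self, image, trimap):
        x = torch.cat([image, trimap], dim=1)      # RGB image + single-channel trimap
        local_feat = self.matting_encoder(x)       # stride-2 local features
        global_feat = self.context_encoder(x)      # stride-8 context features
        # Upsample the context features to the resolution of the matting
        # features, then concatenate the two streams.
        global_feat = nn.functional.interpolate(
            global_feat, size=local_feat.shape[-2:], mode="bilinear",
            align_corners=False)
        fused = torch.cat([local_feat, global_feat], dim=1)
        # Decode alpha and foreground simultaneously and return them at input resolution.
        alpha = nn.functional.interpolate(
            self.alpha_decoder(fused), scale_factor=2, mode="bilinear",
            align_corners=False)
        fg = nn.functional.interpolate(
            self.fg_decoder(fused), scale_factor=2, mode="bilinear",
            align_corners=False)
        return alpha, fg

if __name__ == "__main__":
    net = ContextAwareMatting()
    alpha, fg = net(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
    print(alpha.shape, fg.shape)  # torch.Size([1, 1, 256, 256]) torch.Size([1, 3, 256, 256])

In training, the abstract pairs a Laplacian loss (for numerical accuracy) with a feature loss (for perceptual quality); a sketch of such a setup would compare predictions and ground truth both on Laplacian pyramid levels and in the feature space of a pretrained network, but the exact losses and weights used by the authors are given in the paper, not here.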

Related Material


[pdf]
[bibtex]
@InProceedings{Hou_2019_ICCV,
author = {Hou, Qiqi and Liu, Feng},
title = {Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}