Multi-Modal Bifurcated Network for Depth Guided Image Relighting

Hao-Hsiang Yang, Wei-Ting Chen, Hao-Lun Luo, Sy-Yen Kuo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 260-267

Abstract
Image relighting aims to recalibrate the illumination setting in an image. In this paper, we propose a deep learning-based method called the multi-modal bifurcated network (MBNet) for depth guided image relighting. That is, given an image and the corresponding depth map, our network generates a new image with the target illuminant angle and color temperature. The model extracts image and depth features with a bifurcated network in the encoder. To use the two sets of features effectively, we adopt dynamic dilated pyramid modules in the decoder. Moreover, to increase the variety of the training data, we propose a novel data processing pipeline that enlarges the number of training samples. Experiments conducted on the VIDIT dataset show that the proposed solution obtains 1st place in terms of SSIM and PMS in the NTIRE 2021 Depth Guide One-to-one Relighting Challenge.
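The core architectural idea, a bifurcated encoder that processes the RGB image and the depth map in separate branches before the decoder fuses both feature streams, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the branch structure, weights, and shapes below are purely illustrative assumptions, using a 1x1 channel-mixing step and average pooling in place of the paper's convolutional encoders.

```python
import numpy as np

def encode(x, w):
    """Toy 'encoder' branch: 1x1 conv (channel mixing) + 2x2 average pooling.
    Stands in for one branch of the bifurcated encoder (illustrative only)."""
    h, wd, _ = x.shape
    feat = x @ w                                              # 1x1 convolution as a matrix product
    feat = feat.reshape(h // 2, 2, wd // 2, 2, -1).mean(axis=(1, 3))  # 2x downsampling
    return feat

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))      # input image (H, W, 3)
depth = rng.random((8, 8, 1))    # corresponding depth map (H, W, 1)

# Two separate branches: one for the image, one for the depth map.
rgb_feat = encode(rgb, rng.random((3, 4)))      # image branch   -> (4, 4, 4)
depth_feat = encode(depth, rng.random((1, 4)))  # depth branch   -> (4, 4, 4)

# The decoder consumes both feature maps; here we simply concatenate them.
fused = np.concatenate([rgb_feat, depth_feat], axis=-1)
print(fused.shape)  # (4, 4, 8)
```

In the actual MBNet, each branch is a deep convolutional encoder and the fusion is performed by dynamic dilated pyramid modules in the decoder rather than plain concatenation; the sketch only shows the two-branch data flow.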

Related Material
[pdf] [arXiv]
[bibtex]
@InProceedings{Yang_2021_CVPR,
    author    = {Yang, Hao-Hsiang and Chen, Wei-Ting and Luo, Hao-Lun and Kuo, Sy-Yen},
    title     = {Multi-Modal Bifurcated Network for Depth Guided Image Relighting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {260-267}
}