RTTLC: Video Colorization With Restored Transformer and Test-Time Local Converter

Jinjing Li, Qirong Liang, Qipei Li, Ruipeng Gang, Ji Fang, Chichen Lin, Shuang Feng, Xiaofeng Liu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 1722-1730

Abstract


Video colorization is a highly challenging and ill-posed problem that suffers from severe flickering artifacts and color distribution inconsistency. To resolve these issues, we propose a Restored Transformer and Test-time Local Converter network (RTTLC). Firstly, we introduce a Bi-directional Recurrent Block and a Learnable Guided Mask to our network. These modules leverage hidden knowledge from adjacent frames, which carry rich information about occlusions, and yield significant improvements in visual quality. Secondly, we integrate a Restored Transformer that enables the network to exploit more spatial contextual information and capture multi-scale information more accurately. Thirdly, during inference, we apply the Test-time Local Converter (TLC) strategy to alleviate the train-test distribution shift and improve model performance. Experimental results show competitive performance in terms of the FID and CDC metrics. Notably, RTTLC takes second place in both tracks of the NTIRE 2023 Video Colorization Challenge.
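
The TLC strategy can be illustrated with a minimal sketch. Assuming a PyTorch-style model, the idea is to replace statistics aggregated over the whole frame during training (e.g., the global average pooling inside an attention-style block) with statistics aggregated over local windows at inference, so that features computed on full-resolution frames better match those seen on training patches. The function name, window size, and padding choices below are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def tlc_local_mean(feat: torch.Tensor, window: int = 256) -> torch.Tensor:
    """Per-position local mean used in place of a global mean at test time.

    feat: (N, C, H, W) feature map; window: assumed local aggregation size.
    Returns a (N, C, H, W) map of local means, same spatial size as the input.
    """
    _, _, h, w = feat.shape
    kh, kw = min(window, h), min(window, w)
    # Pad so a stride-1 sliding window keeps the spatial size unchanged.
    pad_top, pad_left = (kh - 1) // 2, (kw - 1) // 2
    pad_bottom, pad_right = kh - 1 - pad_top, kw - 1 - pad_left
    feat = F.pad(feat, (pad_left, pad_right, pad_top, pad_bottom), mode="replicate")
    # Local-window average instead of the global average used during training.
    return F.avg_pool2d(feat, kernel_size=(kh, kw), stride=1)

# Usage sketch: inside an attention/SE-style block, swap
#   pooled = feat.mean(dim=(2, 3), keepdim=True)   # training-time global statistic
# for
#   pooled = tlc_local_mean(feat)                   # test-time local statistic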

Related Material


[pdf]
[bibtex]
@InProceedings{Li_2023_CVPR,
  author    = {Li, Jinjing and Liang, Qirong and Li, Qipei and Gang, Ruipeng and Fang, Ji and Lin, Chichen and Feng, Shuang and Liu, Xiaofeng},
  title     = {RTTLC: Video Colorization With Restored Transformer and Test-Time Local Converter},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2023},
  pages     = {1722-1730}
}