Describing and Localizing Multiple Changes With Transformers

Yue Qiu, Shintaro Yamamoto, Kodai Nakashima, Ryota Suzuki, Kenji Iwata, Hirokatsu Kataoka, Yutaka Satoh; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1971-1980

Abstract


Existing change captioning studies have mainly focused on a single change. However, detecting and describing multiple changed parts in image pairs is essential for enhancing adaptability to complex scenarios. We address this problem from three aspects: (i) we propose a simulation-based multi-change captioning dataset; (ii) we benchmark existing state-of-the-art single-change captioning methods on multi-change captioning; and (iii) we propose Multi-Change Captioning transformers (MCCFormers), which identify change regions by densely correlating different regions in image pairs and dynamically associate change regions with words in sentences. The proposed method obtained the highest scores on four conventional change captioning evaluation metrics for multi-change captioning. Additionally, it separates attention maps for each change and performs well at change localization. Moreover, the proposed framework outperformed previous state-of-the-art methods on an existing change captioning benchmark, CLEVR-Change, by a large margin (+6.1 BLEU-4 and +9.7 CIDEr), indicating its general ability in change captioning tasks. The code and dataset are available at the project page.
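The dense correlation step the abstract describes can be illustrated as scaled dot-product cross-attention between region features of the "before" and "after" images: every region of one image attends to every region of the other, yielding a pairwise correlation map. The sketch below is illustrative only; the function name, feature dimensions, and single-head formulation are assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_attention(q_regions, kv_regions):
    """Scaled dot-product attention: each query region attends to all
    regions of the other image (dense pairwise correlation)."""
    d = q_regions.shape[-1]
    scores = q_regions @ kv_regions.T / np.sqrt(d)  # (Nq, Nk) correlation map
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key regions
    return weights @ kv_regions, weights

# toy example: 4 region features per image, 8-dim each (hypothetical sizes)
rng = np.random.default_rng(0)
before = rng.standard_normal((4, 8))
after = rng.standard_normal((4, 8))

attended, attn_map = cross_attention(before, after)
print(attended.shape, attn_map.shape)  # (4, 8) (4, 4)
```

Rows of `attn_map` where attention is spread across several "after" regions are candidate change locations; in the paper this correlation is learned end-to-end within transformer layers rather than computed on raw features.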

Related Material


@InProceedings{Qiu_2021_ICCV,
  author    = {Qiu, Yue and Yamamoto, Shintaro and Nakashima, Kodai and Suzuki, Ryota and Iwata, Kenji and Kataoka, Hirokatsu and Satoh, Yutaka},
  title     = {Describing and Localizing Multiple Changes With Transformers},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {1971-1980}
}