Panoptic-Aware Image-to-Image Translation
Abstract
Despite remarkable progress in image translation, complex scenes with multiple discrepant objects remain challenging. Translated images often have low fidelity, and small objects are rendered with too little detail, leading to unsatisfactory object recognition performance. Without thorough object perception (i.e., bounding boxes, categories, and masks) of images as prior knowledge, the style transformation of each object is difficult to track during translation. We propose panoptic-aware generative adversarial networks (PanopticGAN) for image-to-image translation, together with a compact panoptic segmentation dataset. The panoptic perception (i.e., foreground instances and background semantics of the image scene) is extracted to align object content codes of the input domain with panoptic-level style codes sampled from the target style space, which are then refined by a proposed feature masking module for sharpening object boundaries. An image-level combination of content and sampled style codes is also merged for higher-fidelity image generation. Our method was systematically compared with competing methods and achieved significant improvements in both image quality and object recognition performance.
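The abstract only sketches the mechanism, but the core idea it describes (per-object style codes applied under panoptic masks, refined by a feature masking step, combined with an image-level style) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; all module and parameter names below are hypothetical, and the modulation is assumed to be an AdaIN-style affine transform for illustration only.

```python
import torch
import torch.nn as nn


class PanopticStyleInjection(nn.Module):
    """Illustrative sketch (not the paper's code): inject a per-object style code
    into content features only where that object's panoptic mask is active, on top
    of an image-level style code that sets the global appearance."""

    def __init__(self, content_dim=256, style_dim=64):
        super().__init__()
        # Map a style code to per-channel scale and shift (AdaIN-style modulation).
        self.to_affine = nn.Linear(style_dim, 2 * content_dim)
        # Hypothetical "feature masking" step: refine/soften mask boundaries.
        self.mask_refine = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def modulate(self, content, style):
        # content: (B, C, H, W); style: (B, style_dim)
        gamma, beta = self.to_affine(style).chunk(2, dim=1)
        gamma = gamma[..., None, None]
        beta = beta[..., None, None]
        mean = content.mean(dim=(2, 3), keepdim=True)
        std = content.std(dim=(2, 3), keepdim=True) + 1e-6
        return gamma * (content - mean) / std + beta

    def forward(self, content, panoptic_masks, object_styles, image_style):
        # content:        (B, C, H, W) encoder features of the input image
        # panoptic_masks: (B, N, H, W) binary masks for N instance/stuff regions
        # object_styles:  (B, N, style_dim) style codes sampled from the target domain
        # image_style:    (B, style_dim) image-level style code
        out = self.modulate(content, image_style)                  # global style
        for i in range(panoptic_masks.size(1)):
            mask = panoptic_masks[:, i:i + 1]                      # (B, 1, H, W)
            mask = torch.sigmoid(self.mask_refine(mask))           # refined boundary
            styled = self.modulate(content, object_styles[:, i])   # per-object style
            out = mask * styled + (1 - mask) * out                 # blend by mask
        return out
```

In this reading, each panoptic region gets its own sampled style while the refined masks keep object boundaries sharp; how PanopticGAN actually fuses the two levels is detailed in the paper itself.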
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Zhang_2023_WACV,
  author    = {Zhang, Liyun and Ratsamee, Photchara and Wang, Bowen and Luo, Zhaojie and Uranishi, Yuki and Higashida, Manabu and Takemura, Haruo},
  title     = {Panoptic-Aware Image-to-Image Translation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {259-268}
}