Video Salient Object Detection via Contrastive Features and Attention Modules

Yi-Wen Chen, Xiaojie Jin, Xiaohui Shen, Ming-Hsuan Yang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1320-1329

Abstract
Video salient object detection aims to find the most visually distinctive objects in a video. To model temporal dependencies, existing methods usually resort to recurrent neural networks or optical flow. However, these approaches incur high computational costs and tend to accumulate inaccuracies over time. In this paper, we propose a network with attention modules that learns contrastive features for video salient object detection without computationally expensive temporal modeling. We develop a non-local self-attention scheme to capture global information within a video frame, and use a co-attention formulation to combine low-level and high-level features. We further apply contrastive learning to improve the feature representations: foreground region pairs from the same video are pulled together, while foreground-background region pairs are pushed apart in the latent space. The intra-frame contrastive loss helps separate the foreground and background features, and the inter-frame contrastive loss improves temporal consistency. We conduct extensive experiments on several benchmark datasets for video salient object detection and unsupervised video object segmentation, and show that the proposed method requires less computation and performs favorably against state-of-the-art approaches.
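
The abstract states the contrastive objective only at a high level, so the following is a minimal, hypothetical PyTorch sketch of an InfoNCE-style region contrastive loss consistent with that description; the function name, the temperature value, and the assumption that foreground/background regions are pooled into fixed-size embeddings are ours, not the paper's.

import torch
import torch.nn.functional as F

def region_contrastive_loss(fg_a, fg_b, bg, temperature=0.1):
    # fg_a, fg_b: (N, D) foreground region embeddings forming positive pairs
    #             (same frame for the intra-frame loss, different frames of
    #             the same video for the inter-frame loss).
    # bg:         (M, D) background region embeddings serving as negatives.
    fg_a = F.normalize(fg_a, dim=1)
    fg_b = F.normalize(fg_b, dim=1)
    bg = F.normalize(bg, dim=1)

    # Positive similarities: foreground pairs are pulled together.
    pos = (fg_a * fg_b).sum(dim=1, keepdim=True) / temperature   # (N, 1)
    # Negative similarities: foreground-background pairs are pushed apart.
    neg = (fg_a @ bg.t()) / temperature                          # (N, M)

    logits = torch.cat([pos, neg], dim=1)                        # (N, 1 + M)
    labels = torch.zeros(fg_a.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

Pairing fg_a and fg_b within one frame corresponds to the intra-frame loss described above, while drawing them from different frames of the same video yields the inter-frame variant that encourages temporal consistency.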

Related Material
@InProceedings{Chen_2022_WACV,
    author    = {Chen, Yi-Wen and Jin, Xiaojie and Shen, Xiaohui and Yang, Ming-Hsuan},
    title     = {Video Salient Object Detection via Contrastive Features and Attention Modules},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1320-1329}
}