Learning Complementary Maps for Light Field Salient Object Detection

Zeyu Xiao, Jiateng Shou, Zhiwei Xiong; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 4403-4421

Abstract

Light field (LF) imaging presents a promising avenue for advancing salient object detection (SOD). However, existing LF SOD (LFSOD) methods struggle to effectively aggregate features from all-in-focus (AiF) images and focal slices. They also under-utilize the complementary nature of saliency and non-saliency maps, leading to inaccurate predictions, particularly at fine boundaries. To tackle these limitations, we introduce a novel LFSOD method. Our method incorporates a Cross-Modality Aggregation (CMA) module at multiple levels, enabling efficient fusion of AiF image and focal slice features. This progressive aggregation exploits global and local dependencies to harness the implicit geometric information in an LF. Based on the observation that salient regions and their non-salient counterparts are complementary, so that a better estimate of one side improves the estimate of the other and vice versa, we introduce the Complementary Saliency Map Generator (CSMG), which generates saliency and non-saliency maps interactively to exploit this complementary relationship. Extensive experiments on benchmark datasets demonstrate that our method achieves superior LFSOD performance.
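For intuition, the complementary-map idea can be illustrated with a minimal PyTorch sketch. This is not the paper's CSMG or CMA: the module name ComplementaryHead, the channel sizes, the conv-based cross-branch refinement, and the consistency term are all illustrative assumptions, standing in for whatever interactive generation scheme the paper actually uses.

    import torch
    import torch.nn as nn

    class ComplementaryHead(nn.Module):
        """Toy two-branch head (hypothetical): predicts a saliency map and a
        non-saliency map from fused features, letting each branch refine the
        other. Not the authors' implementation."""

        def __init__(self, in_ch: int = 64):
            super().__init__()
            self.sal_branch = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)
            self.non_branch = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)
            # Each branch is refined with the other's estimate concatenated in.
            self.sal_refine = nn.Conv2d(in_ch + 1, 1, kernel_size=3, padding=1)
            self.non_refine = nn.Conv2d(in_ch + 1, 1, kernel_size=3, padding=1)

        def forward(self, feat: torch.Tensor):
            sal = torch.sigmoid(self.sal_branch(feat))   # initial saliency map
            non = torch.sigmoid(self.non_branch(feat))   # initial non-saliency map
            # Interactive refinement: a better estimate of one side
            # informs the estimate of the other, and vice versa.
            sal = torch.sigmoid(self.sal_refine(torch.cat([feat, non], dim=1)))
            non = torch.sigmoid(self.non_refine(torch.cat([feat, sal], dim=1)))
            return sal, non

    if __name__ == "__main__":
        head = ComplementaryHead(in_ch=64)
        # Random tensor standing in for CMA-fused AiF/focal-slice features.
        fused = torch.randn(2, 64, 64, 64)
        sal, non = head(fused)
        # Complementarity could be encouraged with a consistency term such as
        # (sal + non - 1)^2, alongside the usual supervised losses (assumption).
        consistency = ((sal + non - 1.0) ** 2).mean()
        print(sal.shape, non.shape, consistency.item())

The key design point the sketch tries to capture is the two-way coupling: each branch conditions its refined prediction on the other branch's estimate, so improving either map tightens the other, particularly along object boundaries.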

Related Material

[pdf] [supp]
[bibtex]
@InProceedings{Xiao_2024_ACCV,
    author    = {Xiao, Zeyu and Shou, Jiateng and Xiong, Zhiwei},
    title     = {Learning Complementary Maps for Light Field Salient Object Detection},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {4403-4421}
}