Learning Saliency From Fixations

Yasser Abdelaziz Dahou Djilali, Kevin McGuinness, Noel O’Connor; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 383-393

Abstract


We present a novel approach for saliency prediction in images, leveraging parallel decoding in transformers to learn saliency solely from fixation maps. Models typically rely on continuous saliency maps to overcome the difficulty of optimizing for the discrete fixation map. We attempt to replicate the experimental setup that generates saliency datasets. Our approach treats saliency prediction as a direct set prediction problem, via a global loss that enforces unique fixation predictions through bipartite matching, together with a transformer encoder-decoder architecture. Using a fixed set of learned fixation queries, the cross-attention reasons over the image features to directly output the fixation points, distinguishing it from other modern saliency predictors. Our approach, named Saliency TRansformer (SalTR), achieves remarkable results on the SALICON benchmark.
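To illustrate the set-prediction formulation described above, the sketch below shows a DETR-style bipartite matching loss between a fixed set of predicted fixation queries and ground-truth fixation points. This is a minimal, hedged example of the general technique, not the paper's exact loss: the function names, the L1/confidence cost terms, and the loss weighting are assumptions.

```python
import torch
from scipy.optimize import linear_sum_assignment


def fixation_matching_loss(pred_points, pred_logits, gt_points):
    """Sketch of a bipartite-matching loss for fixation set prediction.

    pred_points: (Q, 2) predicted fixation coordinates in [0, 1]
    pred_logits: (Q,)   confidence that each query is a real fixation
    gt_points:   (G, 2) ground-truth fixation coordinates, with G <= Q
    """
    # Cost matrix: L1 distance between each query and each ground-truth fixation,
    # discounted by the query's confidence so that confident, well-placed
    # queries are preferred during matching (assumed cost design).
    cost = torch.cdist(pred_points, gt_points, p=1) - pred_logits.sigmoid().unsqueeze(1)

    # Hungarian algorithm: unique one-to-one assignment with minimal total cost,
    # which is what enforces "unique fixation predictions".
    row_idx, col_idx = linear_sum_assignment(cost.detach().cpu().numpy())

    # Regression loss on matched pairs, plus a classification loss pushing
    # matched queries toward "fixation" and unmatched queries toward "no fixation".
    matched = torch.zeros_like(pred_logits)
    matched[row_idx] = 1.0
    reg_loss = torch.nn.functional.l1_loss(pred_points[row_idx], gt_points[col_idx])
    cls_loss = torch.nn.functional.binary_cross_entropy_with_logits(pred_logits, matched)
    return reg_loss + cls_loss
```

In this formulation the decoder's fixation queries play the role of DETR's object queries: each query either lands on one ground-truth fixation or is trained to predict "no fixation", removing the need for a continuous saliency-map target.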

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Djilali_2024_WACV,
    author    = {Djilali, Yasser Abdelaziz Dahou and McGuinness, Kevin and O{\textquoteright}Connor, Noel},
    title     = {Learning Saliency From Fixations},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {383-393}
}