Investigating Transformers in the Decomposition of Polygonal Shapes As Point Collections

Andrea Alfieri, Yancong Lin, Jan C. van Gemert; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 2076-2085

Abstract


Transformers can generate predictions in two ways: 1. auto-regressively, by conditioning each sequence element on the previous ones, or 2. by producing the output sequence directly in parallel. While research has mostly explored this difference on sequential tasks in NLP, we study the difference between auto-regressive and parallel prediction on visual set prediction tasks, and in particular on polygonal shapes in images, because polygons are representative of numerous types of objects, such as buildings or obstacles for aerial vehicles. This is challenging for deep learning architectures, as a polygon can consist of a varying cardinality of points. We provide evidence of the importance of natural orders for Transformers, and show the benefit of decomposing complex polygons into collections of points in an auto-regressive manner.
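
The two decoding modes contrasted in the abstract can be illustrated with a minimal PyTorch sketch on a toy "image features to polygon points" task. This is not the authors' implementation; the model sizes, the zero start token, and the DETR-style learned queries for the parallel case are illustrative assumptions only.

import torch
import torch.nn as nn

d_model, n_points = 64, 8                      # hypothetical sizes
decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
to_xy = nn.Linear(d_model, 2)                  # map decoder output to an (x, y) point
embed_xy = nn.Linear(2, d_model)               # embed a point as the next decoder input

memory = torch.randn(1, 16, d_model)           # stand-in for encoded image features

# 1. Auto-regressive: each point is conditioned on all previously generated points.
points = [torch.zeros(1, 1, 2)]                # start token (here simply a zero point)
for _ in range(n_points):
    tgt = embed_xy(torch.cat(points, dim=1))   # embed the points generated so far
    mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
    out = decoder(tgt, memory, tgt_mask=mask)  # causal mask enforces left-to-right decoding
    points.append(to_xy(out[:, -1:, :]))       # keep only the newest predicted point
autoregressive_polygon = torch.cat(points[1:], dim=1)     # shape (1, n_points, 2)

# 2. Parallel: all points are predicted in one forward pass from learned queries.
queries = nn.Parameter(torch.randn(1, n_points, d_model))
parallel_polygon = to_xy(decoder(queries, memory))        # shape (1, n_points, 2)

The auto-regressive loop naturally handles a varying number of points (generation can stop at a learned end token), whereas the parallel variant fixes the query count in advance, which is one reason the ordering and decomposition of the point set matters.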

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Alfieri_2021_ICCV,
    author    = {Alfieri, Andrea and Lin, Yancong and van Gemert, Jan C.},
    title     = {Investigating Transformers in the Decomposition of Polygonal Shapes As Point Collections},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {2076-2085}
}