Building Vision Transformers with Hierarchy Aware Feature Aggregation

Yongjie Chen, Hongmin Liu, Haoran Yin, Bin Fan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 5908-5918

Abstract


Thanks to the excellent global modeling capability of attention mechanisms, the Vision Transformer has achieved better results than ConvNets in many computer vision tasks. However, when generating hierarchical feature maps, the Transformer still adopts the ConvNet feature aggregation scheme. This causes the semantic information of the image's grid regions to become confused after feature aggregation, making it difficult for attention to accurately model global relationships. To address this, we propose the Hierarchy Aware Feature Aggregation framework (HAFA). HAFA adaptively enhances the extraction of local features in shallow layers, where semantic information is weak, and aggregates patches with similar semantics in deep layers. The clear semantic information of the aggregated patches enables the attention mechanism to model global information more accurately at the semantic level. Extensive experiments show that with the HAFA framework, significant improvements are achieved over the baseline models on image classification, object detection, and semantic segmentation tasks.
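
The core idea, replacing fixed-grid downsampling with semantics-driven patch aggregation, can be illustrated with a short sketch. The snippet below is not HAFA itself (the paper defines its own aggregation operator); it is a minimal, hypothetical PyTorch example of merging tokens by feature similarity rather than by spatial position, in the spirit of the aggregation described above. The function name similarity_merge and all shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def similarity_merge(tokens):
    # Hypothetical sketch, not the paper's operator.
    # tokens: (B, N, C) patch embeddings, N even.
    # Returns (B, N // 2, C): token count halved by merging each token
    # in one half with its most similar token in the other half,
    # instead of pooling over a fixed 2x2 spatial grid.
    B, N, C = tokens.shape
    src, dst = tokens[:, ::2], tokens[:, 1::2]
    # cosine similarity between the two halves: (B, N//2, N//2)
    sim = F.normalize(src, dim=-1) @ F.normalize(dst, dim=-1).transpose(1, 2)
    match = sim.argmax(dim=-1)  # most similar dst token for each src token
    idx = match.unsqueeze(-1).expand(-1, -1, C)
    merged = dst.clone().scatter_add_(1, idx, src)  # sum each src into its matched dst
    counts = torch.ones(B, N // 2, 1, device=tokens.device).scatter_add_(
        1, match.unsqueeze(-1), torch.ones(B, N // 2, 1, device=tokens.device))
    return merged / counts  # average each dst token with its matched src tokens

Example usage: for x = torch.randn(2, 196, 384) (14x14 patch tokens with 384 channels), similarity_merge(x) returns a (2, 98, 384) tensor, so downstream attention operates on fewer, semantically grouped tokens.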

Related Material


[bibtex]
@InProceedings{Chen_2023_ICCV,
    author    = {Chen, Yongjie and Liu, Hongmin and Yin, Haoran and Fan, Bin},
    title     = {Building Vision Transformers with Hierarchy Aware Feature Aggregation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {5908-5918}
}