HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution

Shu-Chuan Chu, Zhi-Chao Dou, Jeng-Shyang Pan, Shaowei Weng, Junbao Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 6257-6266

Abstract


Transformer-based methods have demonstrated excellent performance on super-resolution visual tasks, surpassing conventional convolutional neural networks. However, existing work typically restricts self-attention computation to non-overlapping windows to save computational costs. This means that Transformer-based networks can only use input information from a limited spatial range. Therefore, a novel Hybrid Multi-Axis Aggregation network (HMA) is proposed in this paper to better exploit the potential information in features. HMA is constructed by stacking Residual Hybrid Transformer Blocks (RHTB) and Grid Attention Blocks (GAB). On the one hand, RHTB combines channel attention and self-attention to enhance non-local feature fusion and produce more visually pleasing results. On the other hand, GAB is used for cross-domain information interaction, jointly modeling similar features to obtain a larger receptive field. For the super-resolution task, a novel pre-training method is designed to further enhance the model's representation capabilities, and the proposed model's effectiveness is validated through extensive experiments. The experimental results show that HMA outperforms state-of-the-art methods on the benchmark datasets. The source code and trained model will be made publicly available upon acceptance of the manuscript.
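The contrast the abstract draws, between self-attention confined to non-overlapping local windows and grid attention that spans the whole image, can be illustrated with a minimal numpy sketch of the two partition schemes. This is an assumption-laden illustration, not the authors' implementation: the helper names and the strided grid convention follow the common MaxViT-style formulation, which GAB-like grid attention resembles.

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping w x w windows.

    Window-based self-attention mixes information only within each window,
    so the receptive field of one attention step is limited to w x w pixels.
    """
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w, w, C)

def grid_partition(x, g):
    """Split an (H, W, C) feature map into groups sampled on a g x g grid.

    Each group gathers one pixel from every (H/g) x (W/g) cell, so attention
    within a group spans the entire image (a dilated, global receptive field).
    This is the assumed MaxViT-style convention, not the paper's exact code.
    """
    H, W, C = x.shape
    x = x.reshape(g, H // g, g, W // g, C)
    return x.transpose(1, 3, 0, 2, 4).reshape(-1, g, g, C)

# Toy 8x8 map whose values encode pixel position: value = row * 8 + col.
x = np.arange(64).reshape(8, 8, 1)
wins = window_partition(x, 4)   # 4 groups; group 0 covers only rows/cols 0..3
grids = grid_partition(x, 4)    # 4 groups; group 0 samples rows/cols 0,2,4,6
```

With the toy input above, the first window contains only values 0..27 (the top-left 4x4 patch), while the first grid group reaches value 54 (row 6, col 6), showing how grid-partitioned attention connects pixels across the full spatial extent.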

Related Material


@InProceedings{Chu_2024_CVPR,
    author    = {Chu, Shu-Chuan and Dou, Zhi-Chao and Pan, Jeng-Shyang and Weng, Shaowei and Li, Junbao},
    title     = {HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {6257-6266}
}