FcaNet: Frequency Channel Attention Networks

Zequn Qin, Pengyi Zhang, Fei Wu, Xi Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 783-792


Attention mechanisms, especially channel attention, have achieved great success in the computer vision field. Many works focus on how to design efficient channel attention mechanisms while ignoring a fundamental problem: the channel attention mechanism represents each channel with a single scalar, which is difficult because it discards a massive amount of information. In this work, we start from a different view and regard the channel representation problem as a compression process using frequency analysis. Based on the frequency analysis, we mathematically prove that conventional global average pooling is a special case of feature decomposition in the frequency domain. With this proof, we naturally generalize the compression step of the channel attention mechanism to the frequency domain and propose a method with multi-spectral channel attention, termed FcaNet. FcaNet is simple but effective. Our method can be implemented by changing a few lines of code within existing channel attention methods. Moreover, the proposed method achieves state-of-the-art results compared with other channel attention methods on image classification, object detection, and instance segmentation tasks. Our method consistently outperforms the baseline SENet with the same number of parameters and the same computational cost. Our code and models are publicly available at https://github.com/cfzd/FcaNet.
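The core observation above — that global average pooling (GAP) is the lowest-frequency component of a 2D DCT decomposition — can be illustrated with a small NumPy sketch. This is not the authors' released implementation; the function names (`dct_basis`, `spectral_pool`) and the unnormalized DCT-II form are illustrative assumptions. Projecting a channel onto the (0, 0) DCT basis, which is constant, recovers GAP up to a factor of H*W; other (u, v) pairs give the additional spectral components that multi-spectral channel attention draws on.

```python
import numpy as np

def dct_basis(u, v, H, W):
    # Unnormalized 2D DCT-II basis at frequency (u, v) on an H x W grid.
    i = np.arange(H).reshape(-1, 1)
    j = np.arange(W).reshape(1, -1)
    return np.cos(np.pi * u * (i + 0.5) / H) * np.cos(np.pi * v * (j + 0.5) / W)

def spectral_pool(feature, freqs):
    # Compress each channel (C, H, W) to one scalar by projecting onto a
    # DCT basis; channels are split evenly among the chosen frequencies.
    C, H, W = feature.shape
    out = np.empty(C)
    group = C // len(freqs)
    for idx, (u, v) in enumerate(freqs):
        basis = dct_basis(u, v, H, W)
        for c in range(idx * group, (idx + 1) * group):
            out[c] = (feature[c] * basis).sum()
    return out

# Special case: with only the lowest frequency (0, 0) the basis is all
# ones, so the result equals H*W times global average pooling.
x = np.random.rand(4, 7, 7)
assert np.allclose(spectral_pool(x, [(0, 0)]), x.mean(axis=(1, 2)) * 49)
```

In a full channel-attention module, the pooled vector would then pass through the usual excitation step (two fully connected layers and a sigmoid), which is why the method can slot into existing channel attention code with only a few changed lines.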

Related Material

@InProceedings{Qin_2021_ICCV,
    author    = {Qin, Zequn and Zhang, Pengyi and Wu, Fei and Li, Xi},
    title     = {FcaNet: Frequency Channel Attention Networks},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {783-792}
}