Adaptive Convolutions With Per-Pixel Dynamic Filter Atom

Ze Wang, Zichen Miao, Jun Hu, Qiang Qiu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12302-12311

Abstract


Applying feature-dependent network weights has proven effective in many fields. In practice, however, constrained by the enormous number of model parameters and large memory footprints, scalable and versatile dynamic convolutions with per-pixel adapted filters are yet to be fully explored. In this paper, we address this challenge by decomposing filters, adapted to each spatial position, over dynamic filter atoms generated by a light-weight network from local features. Adaptive receptive fields can be supported by further representing each filter atom over sets of pre-fixed multi-scale bases. As plug-and-play replacements for convolutional layers, the introduced adaptive convolutions with per-pixel dynamic atoms enable explicit modeling of intra-image variance, while avoiding heavy computation, parameter, and memory costs. Our method preserves the appealing properties of conventional convolutions, namely translation equivariance and parameter efficiency. We present experiments showing that the proposed method delivers comparable or even better performance across tasks, and is particularly effective at handling tasks with significant intra-image variance.
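To make the core idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: at each pixel, a stand-in "lightweight network" (here just one linear map) predicts a small set of filter atoms from the local feature vector, and static shared coefficients compose those atoms into the full per-pixel filters. All shapes, names, and the single-linear-map generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): C_in/C_out channels,
# m filter atoms of size k x k, on an H x W feature map.
C_in, C_out, m, k, H, W = 4, 8, 3, 3, 6, 6
pad = k // 2

x = rng.standard_normal((C_in, H, W))

# Stand-in for the light-weight atom generator: a linear map that predicts
# m k*k dynamic atoms at every pixel from the local C_in-dim feature.
W_atom = rng.standard_normal((m * k * k, C_in)) * 0.1

# Static, spatially shared composition coefficients: the (o, c) filter at a
# pixel is sum_m coeff[o, c, m] * atoms[m], i.e. a decomposition over atoms.
coeff = rng.standard_normal((C_out, C_in, m)) * 0.1

x_pad = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
out = np.zeros((C_out, H, W))

for i in range(H):
    for j in range(W):
        # Per-pixel dynamic atoms (m, k, k), generated from the local feature.
        atoms = (W_atom @ x[:, i, j]).reshape(m, k, k)
        patch = x_pad[:, i:i + k, j:j + k]             # (C_in, k, k)
        # Atom responses per input channel, then composition into outputs.
        resp = np.einsum('mkl,ckl->cm', atoms, patch)  # (C_in, m)
        out[:, i, j] = np.einsum('ocm,cm->o', coeff, resp)

print(out.shape)  # (8, 6, 6)
```

Note the efficiency argument this sketch illustrates: the generator only has to emit `m * k * k` values per pixel instead of a full `C_out * C_in * k * k` filter bank, which is what keeps per-pixel adaptation tractable.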

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wang_2021_ICCV,
    author    = {Wang, Ze and Miao, Zichen and Hu, Jun and Qiu, Qiang},
    title     = {Adaptive Convolutions With Per-Pixel Dynamic Filter Atom},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {12302-12311}
}