SSN: Learning Sparse Switchable Normalization via SparsestMax

Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, Ping Luo; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 443-451

Abstract


Normalization methods improve both optimization and generalization of ConvNets. To further boost performance, the recently proposed switchable normalization (SN) provides a new perspective for deep learning: it learns to select different normalizers for different convolution layers of a ConvNet. However, SN uses a softmax function to learn importance ratios that combine normalizers, leading to redundant computations compared to a single normalizer. This work addresses this issue by presenting Sparse Switchable Normalization (SSN), in which the importance ratios are constrained to be sparse. Unlike l_1 and l_0 constraints, which are difficult to optimize, we turn this constrained optimization problem into feed-forward computation by proposing SparsestMax, a sparse version of softmax. SSN has several appealing properties. (1) It inherits all benefits of SN, such as applicability to various tasks and robustness to a wide range of batch sizes. (2) It is guaranteed to select only one normalizer for each normalization layer, avoiding redundant computations. (3) SSN can be transferred to various tasks in an end-to-end manner. Extensive experiments show that SSN outperforms its counterparts on various challenging benchmarks such as ImageNet, Cityscapes, ADE20K, and Kinetics. Code is available at https://github.com/switchablenorms/Sparse_SwitchNorm.
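To make the key idea concrete: SparsestMax replaces the dense softmax over importance ratios with a projection that can produce exactly sparse outputs. It builds on the sparsemax operator of Martins and Astudillo (2016), the Euclidean projection of the logits onto the probability simplex. The sketch below implements plain sparsemax in NumPy for illustration only; it is not the paper's full SparsestMax, which (as described in the paper) additionally pushes the solution away from the simplex center with a radius that grows during training, so that the output is guaranteed to become one-hot.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    logits z onto the probability simplex. Unlike softmax, it can assign
    exactly zero to some coordinates."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # logits in decreasing order
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)               # cumulative sums of sorted logits
    support = 1.0 + k * z_sorted > cssv      # coordinates kept in the support
    k_z = k[support][-1]                     # support size
    tau = (cssv[k_z - 1] - 1.0) / k_z        # threshold shared by the support
    return np.maximum(z - tau, 0.0)

# Softmax keeps all three candidate normalizers active;
# sparsemax drops one of them entirely.
print(sparsemax([1.2, 0.8, 0.1]))           # -> [0.7 0.3 0. ]
```

In SSN, each normalization layer would feed its learned importance logits (one per candidate normalizer, e.g. IN, LN, BN) through such a projection; driving the output all the way to a one-hot vector is what lets a single normalizer be selected per layer, with no redundant computation at inference.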

Related Material


[bibtex]
@InProceedings{Shao_2019_CVPR,
author = {Shao, Wenqi and Meng, Tianjian and Li, Jingyu and Zhang, Ruimao and Li, Yudian and Wang, Xiaogang and Luo, Ping},
title = {SSN: Learning Sparse Switchable Normalization via SparsestMax},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {443-451},
month = {June},
year = {2019}
}