Augmentation Invariant Training

Weicong Chen, Lu Tian, Liwen Fan, Yu Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Data augmentation is widely acknowledged to help deep neural networks generalize better, yet the ability of networks to transfer across augmentations remains far from satisfactory. Networks perform worse when tested with augmentations not used during training, which is another manifestation of insufficient generalization ability. To address this problem, we carefully design a novel Augmentation Invariant Loss (AiLoss) that helps networks learn augmentation invariance by minimizing intra-augmentation variation. Based on AiLoss, we propose a simple yet efficient training strategy, Augmentation Invariant Training (AIT), to enhance the generalization ability of networks. Extensive experiments show that AIT can be applied to a variety of network architectures and consistently improves their performance on CIFAR-10, CIFAR-100, and ImageNet without increasing computational cost. Extending AIT to multiple networks, we further propose multi-AIT, which learns inter-network augmentation invariance and achieves better performance in enhancing generalization ability. Moreover, further experiments show that networks trained with our strategy do obtain better augmentation transfer ability and learn features that are invariant to augmentations. Our source code is available on GitHub.
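To illustrate the idea of minimizing intra-augmentation variation, the following is a minimal PyTorch-style sketch, not the paper's exact AiLoss: it penalizes feature disagreement between two augmented views of the same image alongside the usual classification loss. All names, the normalized squared-distance form of the penalty, and the weighting factor lam are illustrative assumptions.

import torch
import torch.nn.functional as F

def ai_loss_sketch(features_view1, features_view2):
    # Intra-augmentation variation measured as the mean squared distance
    # between L2-normalized features of two augmented views of the same image.
    f1 = F.normalize(features_view1, dim=1)
    f2 = F.normalize(features_view2, dim=1)
    return ((f1 - f2) ** 2).sum(dim=1).mean()

def training_step(model, images_view1, images_view2, labels, lam=0.1):
    # Hypothetical model returning (features, logits); both augmented views
    # pass through the same network with shared weights.
    feats1, logits1 = model(images_view1)
    feats2, logits2 = model(images_view2)
    # Standard cross-entropy on both views plus the invariance penalty.
    ce = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    return ce + lam * ai_loss_sketch(feats1, feats2)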

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2019_ICCV,
author = {Chen, Weicong and Tian, Lu and Fan, Liwen and Wang, Yu},
title = {Augmentation Invariant Training},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}