Compressing the Input for CNNs with the First-Order Scattering Transform

Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, Michal Valko; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 301-316

Abstract


We consider the first-order scattering transform as a candidate for reducing the signal processed by a convolutional neural network (CNN). We study this transformation and show theoretical and empirical evidence that, in the case of natural images and sufficiently small translation invariance, this transform preserves most of the signal information needed for classification while substantially reducing the spatial resolution and total signal size. We demonstrate that cascading a CNN with this representation performs on par with ImageNet classification models commonly used in downstream tasks, such as ResNet-50. We subsequently apply our ImageNet-trained hybrid model as a base model in a detection system, which typically has larger image inputs. On the Pascal VOC and COCO detection tasks, we find that this leads to substantial improvements in inference speed and training memory consumption compared to models trained directly on the input image.
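A minimal sketch of the hybrid pipeline described in the abstract, assuming the kymatio and torchvision packages (this is not the authors' released code): a first-order scattering transform reduces the spatial resolution of the input, and a standard CNN is trained on top of the scattering coefficients. The choices J=2 (4x spatial downsampling) and L=8 orientations, as well as the modified ResNet stem, are illustrative assumptions.

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D
from torchvision.models import resnet50

J, L = 2, 8                 # scattering scale and number of orientations (illustrative)
H, W = 224, 224             # ImageNet-style input resolution
K = 1 + L * J               # low-pass + first-order coefficients per input channel

# First-order scattering: max_order=1 keeps only |x * psi| * phi and x * phi terms.
scattering = Scattering2D(J=J, shape=(H, W), L=L, max_order=1)

# Adapt a standard ResNet-50 so its stem accepts 3*K scattering channels at
# (H/4) x (W/4) instead of a 3-channel image at full resolution. This adaptation
# is a hypothetical choice for the sketch, not the exact architecture of the paper.
cnn = resnet50()
cnn.conv1 = nn.Conv2d(3 * K, 64, kernel_size=3, stride=1, padding=1, bias=False)
cnn.maxpool = nn.Identity()

def hybrid_forward(x):
    # x: (B, 3, H, W) image batch
    s = scattering(x)                                   # (B, 3, K, H/2^J, W/2^J)
    s = s.view(s.size(0), 3 * K, H // 2 ** J, W // 2 ** J)
    return cnn(s)

logits = hybrid_forward(torch.randn(2, 3, H, W))
print(logits.shape)  # torch.Size([2, 1000])
```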

Related Material


[bibtex]
@InProceedings{Oyallon_2018_ECCV,
author = {Oyallon, Edouard and Belilovsky, Eugene and Zagoruyko, Sergey and Valko, Michal},
title = {Compressing the Input for CNNs with the First-Order Scattering Transform},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}