Linearized Multi-Sampling for Differentiable Image Transformation

Wei Jiang, Weiwei Sun, Andrea Tagliasacchi, Eduard Trulls, Kwang Moo Yi; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 2988-2997

Abstract


We propose a novel image sampling method for differentiable image transformation in deep neural networks. The sampling schemes currently used in deep learning, such as Spatial Transformer Networks, rely on bilinear interpolation, which performs poorly under severe scale changes and, more importantly, results in poor gradient propagation due to its strict reliance on direct neighbors. Instead, we propose to generate random auxiliary samples in the vicinity of each pixel in the sampled image, and to create a linear approximation from their intensity values. We then use this approximation as a differentiable formula for the transformed image. We demonstrate that our approach produces more representative gradients with a wider basin of convergence for image alignment, which leads to considerable performance improvements when training networks for registration and classification tasks. This holds not only under large downsampling, but also when there are no scale changes. We compare our approach with multi-scale sampling and show that we outperform it. Finally, we demonstrate that our improvements to the sampler are compatible with other tangential improvements to Spatial Transformer Networks, further improving their performance.
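The core idea described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the authors' implementation, which operates batch-wise inside a network): for a query location we draw a few random auxiliary samples around it, look up their intensities, and fit a local linear model by least squares. The fitted intercept serves as the sampled value and the fitted slopes play the role of the spatial gradient that bilinear interpolation would otherwise compute from direct neighbors only. The function names, the Gaussian offset distribution, and the parameters `k` and `sigma` are illustrative assumptions.

```python
import numpy as np

def bilinear(img, x, y):
    """Standard bilinear lookup with border clamping (used here only to
    read off intensity values at the auxiliary sample locations)."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x0c, x1c = np.clip([x0, x0 + 1], 0, w - 1)
    y0c, y1c = np.clip([y0, y0 + 1], 0, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0c, x0c] + fx * img[y0c, x1c]
    bot = (1 - fx) * img[y1c, x0c] + fx * img[y1c, x1c]
    return (1 - fy) * top + fy * bot

def linearized_sample(img, x, y, k=8, sigma=0.5, rng=None):
    """Sketch of linearized multi-sampling for one query point (x, y):
    draw k random auxiliary offsets, sample intensities, and fit
    I(dx, dy) ~ a + gx*dx + gy*dy by least squares. Returns the value
    at the query point and the fitted spatial gradient (gx, gy)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Include the query point itself plus k Gaussian-perturbed samples.
    dx = np.concatenate([[0.0], rng.normal(0.0, sigma, k)])
    dy = np.concatenate([[0.0], rng.normal(0.0, sigma, k)])
    vals = np.array([bilinear(img, x + ox, y + oy) for ox, oy in zip(dx, dy)])
    # Design matrix for the local linear model [1, dx, dy].
    A = np.stack([np.ones_like(dx), dx, dy], axis=1)
    a, gx, gy = np.linalg.lstsq(A, vals, rcond=None)[0]
    return a, (gx, gy)
```

On a perfectly linear intensity ramp the fit is exact, so the returned gradient matches the true image gradient; in a real network this linear model is what supplies well-behaved gradients with respect to the sampling coordinates.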

Related Material


[bibtex]
@InProceedings{Jiang_2019_ICCV,
author = {Jiang, Wei and Sun, Weiwei and Tagliasacchi, Andrea and Trulls, Eduard and Yi, Kwang Moo},
title = {Linearized Multi-Sampling for Differentiable Image Transformation},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}