Scaling Recurrent Models via Orthogonal Approximations in Tensor Trains

Ronak Mehta, Rudrasis Chakraborty, Yunyang Xiong, Vikas Singh; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 10571-10579

Abstract


Modern deep networks have proven to be very effective for analyzing real-world images. However, their application in medical imaging is still in its early stages, primarily due to the large size of three-dimensional images, which requires enormous convolutional or fully connected layers if we treat an image (and not image patches) as a sample. These issues only compound when the focus moves towards longitudinal analysis of 3D image volumes through recurrent structures, and when a point estimate of model parameters is insufficient in scientific applications where a reliability measure is necessary. Using insights from differential geometry, we adapt the tensor train decomposition to construct networks with significantly fewer parameters, allowing us to train powerful recurrent networks on whole brain image volume sequences. We describe the "orthogonal" tensor train, and demonstrate its ability to express a standard network layer both theoretically and empirically. We show its ability to effectively reconstruct whole brain volumes with faster convergence and stronger confidence intervals compared to the standard tensor train decomposition. We provide code and show experiments on the ADNI dataset using image sequences to regress on a cognition-related outcome.
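To make the parameter savings behind the abstract concrete, here is a minimal NumPy sketch of the standard tensor-train (TT) matrix format that the paper builds on: a large weight matrix is represented by a chain of small 4-way cores, and the full matrix is recovered by contracting the cores along their rank indices. The mode shapes, TT-ranks, and function names below are illustrative assumptions, not the paper's code, and this sketch shows the plain TT decomposition rather than the orthogonal variant the paper proposes.

```python
import numpy as np

# Hypothetical factorization: a 256 x 256 weight matrix with
# row modes (4, 8, 8) and column modes (4, 8, 8).
in_modes = [4, 8, 8]      # product = 256 rows
out_modes = [4, 8, 8]     # product = 256 columns
ranks = [1, 3, 3, 1]      # TT-ranks; boundary ranks are always 1

rng = np.random.default_rng(0)
# One 4-way core per mode, with shape (r_{k-1}, m_k, n_k, r_k).
cores = [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(len(in_modes))]

def tt_to_full(cores):
    """Contract TT-matrix cores into the full (prod m_k) x (prod n_k) matrix."""
    full = cores[0]  # shape (1, m_1, n_1, r_1)
    for core in cores[1:]:
        # Contract the trailing rank index with the next core's leading rank index.
        full = np.tensordot(full, core, axes=([-1], [0]))
    # Axes are now (1, m_1, n_1, ..., m_d, n_d, 1); drop the boundary ranks
    # and group all row modes before all column modes.
    full = full.squeeze(axis=(0, -1))
    d = len(cores)
    full = full.transpose(list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2)))
    m = int(np.prod([c.shape[1] for c in cores]))
    n = int(np.prod([c.shape[2] for c in cores]))
    return full.reshape(m, n)

W = tt_to_full(cores)
tt_params = sum(c.size for c in cores)   # 816 parameters in the cores
full_params = W.size                     # 65,536 in the dense matrix
print(W.shape, tt_params, full_params)
```

With these (assumed) ranks, the TT cores hold 816 parameters versus 65,536 for the dense matrix, an ~80x reduction; the same idea scales to the fully connected and recurrent layers over whole 3D volumes discussed in the paper.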

Related Material


[pdf]
[bibtex]
@InProceedings{Mehta_2019_ICCV,
author = {Mehta, Ronak and Chakraborty, Rudrasis and Xiong, Yunyang and Singh, Vikas},
title = {Scaling Recurrent Models via Orthogonal Approximations in Tensor Trains},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}