Convolutional Auto-Encoder With Tensor-Train Factorization

Manish Sharma, Panos P. Markopoulos, Eli Saber, M. Salman Asif, Ashley Prater-Bennette; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 198-206

Abstract


Convolutional auto-encoders (CAEs) are extensively used for general-purpose feature extraction, image reconstruction, image denoising, and other machine learning tasks. Despite their many successes, and similar to other convolutional networks, CAEs often suffer from over-parameterization when trained on small or moderate-sized datasets. In such cases, CAEs incur excess computational and memory overhead and exhibit decreased performance due to parameter over-fitting. In this work, we introduce CAE-TT: a CAE with a tunable tensor-train (TT) structure imposed on its convolution and transpose-convolution filters. By tuning the TT-ranks, CAE-TT can adjust the number of its learnable parameters without changing the network architecture. In our numerical studies, we demonstrate the performance of the proposed method and compare it with alternatives, in both batch and online learning settings.
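As a rough illustration of the idea described in the abstract, the sketch below (PyTorch) stores a 2-D convolution kernel as three TT-cores and contracts them into a dense kernel at forward time. This is a minimal sketch, not the authors' code: the class name TTConv2d, the mode ordering (k*k, C_in, C_out), and the pair of TT-ranks (r1, r2) are assumptions made here for illustration; the paper's exact factorization of the convolution and transpose-convolution filters may differ.

# Minimal sketch (assumed structure, not the authors' implementation) of a
# 2-D convolution layer whose kernel is stored in tensor-train (TT) form.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TTConv2d(nn.Module):
    """Conv2d whose (C_out, C_in, k, k) kernel is the contraction of three
    TT-cores. Shrinking the TT-ranks (r1, r2) reduces the number of
    learnable parameters without changing the layer's input/output shape."""

    def __init__(self, in_ch, out_ch, kernel_size, tt_ranks=(4, 4),
                 stride=1, padding=0):
        super().__init__()
        k = kernel_size
        r1, r2 = tt_ranks
        self.stride, self.padding = stride, padding
        self.out_ch, self.in_ch, self.k = out_ch, in_ch, k
        # TT-cores for the modes (k*k, C_in, C_out):
        #   G1: 1 x (k*k) x r1,  G2: r1 x C_in x r2,  G3: r2 x C_out x 1
        self.g1 = nn.Parameter(torch.randn(1, k * k, r1) * 0.1)
        self.g2 = nn.Parameter(torch.randn(r1, in_ch, r2) * 0.1)
        self.g3 = nn.Parameter(torch.randn(r2, out_ch, 1) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def full_kernel(self):
        # Contract the TT-cores into a dense (C_out, C_in, k, k) kernel.
        w = torch.einsum('xma,acb,boy->mco', self.g1, self.g2, self.g3)
        return w.permute(2, 1, 0).reshape(self.out_ch, self.in_ch,
                                          self.k, self.k)

    def forward(self, x):
        return F.conv2d(x, self.full_kernel(), self.bias,
                        stride=self.stride, padding=self.padding)


if __name__ == "__main__":
    dense = nn.Conv2d(64, 128, 3, padding=1)
    tt = TTConv2d(64, 128, 3, tt_ranks=(4, 4), padding=1)
    print("dense params:", sum(p.numel() for p in dense.parameters()))
    print("TT params:   ", sum(p.numel() for p in tt.parameters()))
    y = tt(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 128, 32, 32])

Changing tt_ranks only resizes the TT-cores, which is the sense in which the parameter count can be tuned without altering the network architecture; a transpose-convolution filter could be factorized analogously and applied with F.conv_transpose2d.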

Related Material


[pdf]
[bibtex]
@InProceedings{Sharma_2021_ICCV,
    author    = {Sharma, Manish and Markopoulos, Panos P. and Saber, Eli and Asif, M. Salman and Prater-Bennette, Ashley},
    title     = {Convolutional Auto-Encoder With Tensor-Train Factorization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {198-206}
}