Convolutional Auto-Encoder With Tensor-Train Factorization
Convolutional auto-encoders (CAEs) are extensively used for general-purpose feature extraction, image reconstruction, image denoising, and other machine learning tasks. Despite their many successes, CAEs, like other convolutional networks, often suffer from over-parameterization when trained on small or moderate-sized datasets. In such cases, CAEs incur excess computational and memory overhead as well as degraded performance due to over-fitting. In this work, we introduce CAE-TT: a CAE with a tunable tensor-train (TT) structure in its convolution and transpose-convolution filters. By tuning the TT-ranks, CAE-TT can adjust the number of its learnable parameters without changing the network architecture. In our numerical studies, we demonstrate the performance of the proposed method and compare it with alternatives in both batch and online learning settings.
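To make the rank-controlled parameter reduction concrete, the sketch below shows a generic tensor-train factorization of a 4-way convolution kernel in plain NumPy. This is an illustrative example, not the paper's exact CAE-TT layout: the mode ordering `(height, width, in-channels, out-channels)`, the uniform rank choice, and the helper names `tt_cores` / `tt_to_full` are assumptions for demonstration.

```python
import numpy as np

def tt_cores(shape, ranks, rng):
    """Random TT cores G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(shape)]

def tt_to_full(cores):
    """Contract the chain of TT cores back into the full dense tensor."""
    full = cores[0]                              # (1, n_1, r_1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])        # drop boundary ranks r_0 = r_d = 1

rng = np.random.default_rng(0)
shape = (3, 3, 64, 128)            # hypothetical conv kernel: (H, W, C_in, C_out)
dense = int(np.prod(shape))        # 73,728 parameters in the unfactorized filter

# Tuning the internal TT-rank r trades parameter count against expressiveness
# while the kernel (and hence the network architecture) keeps the same shape.
for r in (2, 8):
    ranks = (1, r, r, r, 1)
    cores = tt_cores(shape, ranks, rng)
    kernel = tt_to_full(cores)
    assert kernel.shape == shape   # architecture unchanged
    print(f"rank {r}: {sum(c.size for c in cores)} TT params vs {dense} dense")
```

With rank 2, the TT representation stores 530 numbers instead of 73,728; raising the rank to 8 grows this to 5,336, illustrating how the rank acts as a dial on model capacity.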