LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration

Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan Ramnath, Zsolt Kira, Eva L. Dyer; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 2669-2679

Abstract


Despite significant advances in deep learning, models often struggle to generalize to new, unseen domains, especially when training data is limited. To address this challenge, we propose a novel approach for distribution-aware latent augmentation that leverages the relationships across samples to guide the augmentation procedure. Our approach first degrades the samples stochastically in the latent space, mapping them to augmented labels, and then restores the samples from their corrupted versions during training. This process confuses the classifier during the degradation step and restores the overall class distribution of the original samples, promoting diverse intra-class/cross-domain variability. We extensively evaluate our approach on a diverse set of datasets and tasks, including domain generalization benchmarks and medical imaging datasets with strong domain shift, where we show that it achieves significant improvements over existing methods for latent space augmentation. We further show that our method can be flexibly adapted to long-tail recognition tasks, demonstrating its versatility in building more generalizable models. Code is available at https://github.com/nerdslab/LatentDR.
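The abstract only sketches the degrade-then-restore pipeline at a high level. The toy NumPy sketch below illustrates one plausible reading of the two ingredients named above: a stochastic latent degradation paired with a softened ("augmented") label, plus a restoration objective that pulls corrupted latents back toward the originals. All function names, the dropout-style degradation operator, and the label-softening rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(z, drop_frac=0.3, rng=rng):
    """Stochastically zero out a fraction of latent dimensions.
    (Hypothetical channel-dropout-style degradation; the paper's
    exact operator may differ.)"""
    mask = rng.random(z.shape) >= drop_frac
    return z * mask, mask

def augmented_label(y_onehot, drop_frac=0.3):
    """Soften the one-hot label toward uniform in proportion to the
    degradation strength, so the classifier is deliberately 'confused'
    on degraded samples (illustrative assumption)."""
    n = y_onehot.shape[-1]
    return (1.0 - drop_frac) * y_onehot + drop_frac * np.full(n, 1.0 / n)

def restoration_loss(z_restored, z_original):
    """MSE between restored and original latents: a stand-in for the
    restoration objective applied during training."""
    return float(np.mean((z_restored - z_original) ** 2))

# Toy usage: degrade one latent vector and build its softened target.
z = rng.standard_normal(8)              # a latent feature vector
z_deg, mask = degrade(z)                # corrupted latent + dropout mask
y_soft = augmented_label(np.eye(4)[2])  # true class 2 of 4, softened
```

In this reading, a restoration module would be trained to map `z_deg` back to `z` (driving `restoration_loss` down), while the classifier sees `(z_deg, y_soft)` pairs, encouraging features that remain discriminative under corruption.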

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Liu_2024_WACV,
  author    = {Liu, Ran and Khose, Sahil and Xiao, Jingyun and Sathidevi, Lakshmi and Ramnath, Keerthan and Kira, Zsolt and Dyer, Eva L.},
  title     = {LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {2669-2679}
}