ATOM: Attention Mixer for Efficient Dataset Distillation
Abstract
Recent works in dataset distillation seek to minimize training expenses by generating a condensed synthetic dataset that encapsulates the information present in a larger real dataset. These approaches ultimately aim to attain test accuracy levels akin to those achieved by models trained on the entirety of the original dataset. Previous studies in feature and distribution matching have achieved significant results without incurring the costs of bi-level optimization in the distillation process. Despite their convincing efficiency, many of these methods suffer from marginal downstream performance improvements, limited distillation of contextual information, and subpar cross-architecture generalization. To address these challenges in dataset distillation, we propose the ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a mixture of channel-wise and spatial-wise attention in the feature matching process. Spatial-wise attention helps guide the learning process based on consistent localization of classes in their respective images, allowing for distillation from a broader receptive field. Meanwhile, channel-wise attention captures the contextual information associated with the class itself, making the synthetic images more informative for training. By integrating both types of attention, our ATOM module demonstrates superior performance across various computer vision datasets, including CIFAR10/100 and Tiny ImageNet. Notably, our method significantly improves performance in scenarios with a low number of images per class, thereby enhancing its practical value. Furthermore, the improvements carry over to cross-architecture evaluation and to applications such as neural architecture search.
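To make the mechanism concrete, below is a minimal PyTorch-style sketch of how channel-wise and spatial-wise attention could be mixed over intermediate feature maps and used in a feature-matching objective. The module name, the pooling choices, and the way the two attention maps are combined are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMixer(nn.Module):
    # Hypothetical sketch: channel attention re-weights feature channels,
    # spatial attention re-weights locations; both are applied before matching.
    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) intermediate feature map from a network layer.
        # Channel-wise attention: average over space, softmax over channels.
        channel = torch.softmax(feat.mean(dim=(2, 3), keepdim=True), dim=1)
        # Spatial-wise attention: pool squared activations over channels,
        # then softmax over all H*W locations (assumed pooling choice).
        spatial = feat.pow(2).mean(dim=1, keepdim=True)           # (B, 1, H, W)
        spatial = torch.softmax(spatial.flatten(2), dim=-1).view_as(spatial)
        # Mix both attentions into the features and flatten for matching.
        mixed = feat * channel * spatial
        return F.normalize(mixed.flatten(1), dim=1)               # (B, C*H*W)

def atom_matching_loss(real_feat, syn_feat, mixer):
    # Match the mean attended features of real and synthetic batches,
    # a common distribution-matching objective (assumed form of the loss).
    return (mixer(real_feat).mean(0) - mixer(syn_feat).mean(0)).pow(2).sum()

In such a setup, real_feat and syn_feat would come from the same layer of a network fed real and synthetic images respectively, and the synthetic images would be updated by gradient descent on this loss, avoiding the bi-level optimization mentioned above.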
Related Material
[bibtex]
@InProceedings{Khaki_2024_CVPR,
  author    = {Khaki, Samir and Sajedi, Ahmad and Wang, Kai and Liu, Lucy Z. and Lawryshyn, Yuri A. and Plataniotis, Konstantinos N.},
  title     = {ATOM: Attention Mixer for Efficient Dataset Distillation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {7692--7702}
}