Decomposed Distribution Matching in Dataset Condensation
Abstract
Dataset Condensation (DC) aims to reduce the training effort of deep neural networks by synthesizing a small dataset that is as effective as the original large dataset. Conventionally, DC relies on a costly bi-level optimization, which limits its practicality. Recent research formulates DC as a distribution matching problem, which circumvents the costly bi-level optimization; however, this efficiency comes at the cost of DC performance. To investigate this performance degradation, we decompose the dataset distribution into content and style. Our observations indicate two major shortcomings: 1) a style discrepancy between the original and condensed data, and 2) limited intra-class diversity of the condensed dataset. We present a simple yet effective method to match the style information between the original and condensed data, employing statistical moments of feature maps as well-established style indicators. Moreover, we enhance intra-class diversity, i.e., content, by maximizing the Kullback-Leibler divergence within each synthetic class. We demonstrate the efficacy of our method through experiments on diverse datasets of varying size and resolution, achieving improvements of up to 4.1% on CIFAR10, 4.2% on CIFAR100, 4.3% on TinyImageNet, 2.0% on ImageNet-1K, 3.3% on ImageWoof, 2.5% on ImageNette, and 5.5% in continual learning accuracy.
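The two ingredients the abstract describes lend themselves to a compact illustration. The PyTorch sketch below is not the authors' released code; it shows, under stated assumptions, (a) style matching via the first and second moments (per-channel mean and standard deviation) of feature maps, and (b) an intra-class diversity term that maximizes the pairwise KL divergence among synthetic samples of the same class. The function names, feature shapes, and the softmax normalization used to turn features into distributions are illustrative assumptions.

import torch
import torch.nn.functional as F

def style_loss(real_feats: torch.Tensor, syn_feats: torch.Tensor) -> torch.Tensor:
    # Match the statistical moments (per-channel mean and std) of feature
    # maps between real and synthetic batches; these moments are the
    # well-established style indicators the abstract refers to.
    # Shapes assumed: (N, C, H, W) activations from the same network layer.
    mu_r = real_feats.mean(dim=(0, 2, 3))
    mu_s = syn_feats.mean(dim=(0, 2, 3))
    sd_r = real_feats.std(dim=(0, 2, 3))
    sd_s = syn_feats.std(dim=(0, 2, 3))
    return F.mse_loss(mu_s, mu_r) + F.mse_loss(sd_s, sd_r)

def intra_class_diversity_loss(syn_feats: torch.Tensor) -> torch.Tensor:
    # Encourage diversity among the synthetic samples of a single class by
    # maximizing the average pairwise KL divergence between their feature
    # distributions. Softmax-normalizing pooled features into distributions
    # is an assumption made for this sketch.
    # Shape assumed: (M, D) pooled features for the M synthetic images.
    p = F.softmax(syn_feats, dim=1)
    log_p = p.clamp_min(1e-8).log()
    # kl[i, j] = KL(p_i || p_j), computed for all pairs at once.
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    # Return the negation so that minimizing this loss maximizes divergence.
    return -kl.mean()

In a distribution-matching pipeline, both terms would be added to the usual feature-matching objective and backpropagated into the synthetic images; the relative weighting of the terms is a hyperparameter not specified in this abstract.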
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Malakshan_2025_WACV,
    author    = {Malakshan, Sahar Rahimi and Saadabadi, Mohammad Saeed Ebrahimi and Dabouei, Ali and Nasrabadi, Nasser},
    title     = {Decomposed Distribution Matching in Dataset Condensation},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {7112-7122}
}