LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights

Thibault Castells, Hyoung-Kyu Song, Bo-Kyeong Kim, Shinkook Choi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 821-830

Abstract


Latent Diffusion Models (LDMs) have emerged as powerful generative models, known for delivering remarkable results under constrained computational resources. However, deploying LDMs on resource-limited devices remains a complex issue, presenting challenges such as memory consumption and inference speed. To address this issue, we introduce LD-Pruner, a novel performance-preserving structured pruning method for compressing LDMs. Traditional pruning methods for deep neural networks are not tailored to the unique characteristics of LDMs, such as the high computational cost of training and the absence of a fast, straightforward, and task-agnostic method for evaluating model performance. Our method tackles these challenges by leveraging the latent space during the pruning process, enabling us to effectively quantify the impact of pruning on model performance, independently of the task at hand. This targeted pruning of components with minimal impact on the output allows for faster convergence during training, as the model has less information to re-learn, thereby addressing the high computational cost of training. Consequently, our approach achieves a compressed model that offers improved inference speed and a reduced parameter count, while maintaining minimal performance degradation. We demonstrate the effectiveness of our approach on three different tasks: text-to-image (T2I) generation, Unconditional Image Generation (UIG), and Unconditional Audio Generation (UAG). Notably, we reduce the inference time of Stable Diffusion (SD) by 34.9% while simultaneously improving its FID by 5.2% on the MS-COCO T2I benchmark. This work paves the way for more efficient pruning methods for LDMs, enhancing their applicability.
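
The latent-space pruning criterion described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it supposes the model exposes a `denoise_latents` call mapping fixed noise samples to output latents, plus hypothetical `disable_op`/`enable_op` hooks for temporarily ablating a candidate operator, and it scores each operator by how much its removal shifts simple latent statistics. The paper's exact scoring function and operator set may differ.

```python
# Sketch: rank prunable operators by their impact on the latent output.
# `model.denoise_latents`, `disable_op`, and `enable_op` are hypothetical
# interfaces assumed for illustration; only standard PyTorch ops are used.
import torch


@torch.no_grad()
def latent_impact_score(model, operator_name, noise_batch, disable_op, enable_op):
    """Distance in latent space between the full and ablated model outputs."""
    reference = model.denoise_latents(noise_batch)   # latents from the full model
    disable_op(model, operator_name)                 # temporarily remove the operator
    ablated = model.denoise_latents(noise_batch)     # latents without the operator
    enable_op(model, operator_name)                  # restore the model
    # Compare first- and second-order statistics of the two latent batches;
    # this is one simple, task-agnostic proxy for output degradation.
    mean_shift = (reference.mean() - ablated.mean()).abs()
    std_shift = (reference.std() - ablated.std()).abs()
    return (mean_shift + std_shift).item()


@torch.no_grad()
def rank_operators(model, operator_names, noise_batch, disable_op, enable_op):
    """Return operators sorted from least to most impactful on the latents."""
    scores = {
        name: latent_impact_score(model, name, noise_batch, disable_op, enable_op)
        for name in operator_names
    }
    return sorted(scores, key=scores.get)  # prune the lowest-scoring operators first
```

Because the score is computed entirely in the latent space, the same ranking procedure could in principle be applied to T2I, UIG, or UAG models without any task-specific evaluation metric.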

Related Material


[bibtex]
@InProceedings{Castells_2024_CVPR,
    author    = {Castells, Thibault and Song, Hyoung-Kyu and Kim, Bo-Kyeong and Choi, Shinkook},
    title     = {LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {821-830}
}