Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters

James Seale Smith, Yen-Chang Hsu, Zsolt Kira, Yilin Shen, Hongxia Jin; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1744-1754

Abstract


Recent work has demonstrated a remarkable ability to customize text-to-image diffusion models to multiple fine-grained concepts in a sequential (i.e., continual) manner while providing only a few example images for each concept. This setting is known as continual diffusion. Here we ask the question: Can we scale these methods to longer concept sequences without forgetting? Although prior work mitigates the forgetting of previously learned concepts, we show that its capacity to learn new tasks reaches saturation over longer sequences. We address this challenge by introducing a novel method, STack-And-Mask INcremental Adapters (STAMINA), which is composed of low-rank attention-masked adapters and customized MLP tokens. STAMINA is designed to enhance the robust fine-tuning properties of LoRA for sequential concept learning via learnable hard-attention masks parameterized with low-rank MLPs, enabling precise, scalable learning via sparse adaptation. Notably, all introduced trainable parameters can be folded back into the model after training, inducing no additional inference parameter cost. We show that STAMINA outperforms the prior state of the art for text-to-image continual customization on a 50-concept benchmark composed of landmarks and human faces, with no stored replay data. Additionally, we extend our method to the setting of continual learning for image classification, demonstrating that our gains also translate to state-of-the-art performance on this standard benchmark.
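The abstract's central mechanism can be illustrated in miniature: a low-rank (LoRA-style) update that is sparsified by a hard binary mask, then folded back into the base weight so inference carries no extra parameters. The sketch below is illustrative only, not the authors' implementation; all names, shapes, and the toy mask-producing MLP are assumptions (in training, the paper's hard masks would be learned, e.g. with a straight-through estimator).

```python
import numpy as np

# Illustrative sketch (not the STAMINA code): a low-rank adapter whose
# update is gated by a hard 0/1 mask, then folded into the frozen weight.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.1    # LoRA "down" projection
B = rng.normal(size=(d_out, r)) * 0.1   # LoRA "up" projection

# A tiny (hypothetical) MLP produces mask logits over input dimensions;
# thresholding yields a hard, sparse attention mask.
mlp_w1 = rng.normal(size=(4, d_in))
mlp_w2 = rng.normal(size=(d_in, 4))
logits = mlp_w2 @ np.tanh(mlp_w1 @ np.ones(d_in))
mask = (logits > 0).astype(float)       # hard binary mask -> sparse update

# Masked low-rank update: equivalent to B @ A @ diag(mask).
delta_W = (B @ A) * mask[None, :]
W_folded = W + delta_W                  # fold adapter back into base weight

x = rng.normal(size=d_in)
y_adapter = W @ x + B @ (A @ (mask * x))  # adapter path (as during training)
y_folded = W_folded @ x                   # folded path (inference)
assert np.allclose(y_adapter, y_folded)   # no extra inference parameters
```

The final assertion demonstrates the folding property the abstract claims: once `delta_W` is merged into `W`, the adapted layer is a single dense matrix of the original size.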

Related Material


[bibtex]
@InProceedings{Smith_2024_CVPR,
  author    = {Smith, James Seale and Hsu, Yen-Chang and Kira, Zsolt and Shen, Yilin and Jin, Hongxia},
  title     = {Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1744-1754}
}