@InProceedings{Choi_2025_WACV,
  author    = {Choi, Seunghwan and Yun, Jooyeol and Park, Jeonghoon and Choo, Jaegul},
  title     = {Disentangling Subject-Irrelevant Elements in Personalized Text-to-Image Diffusion via Filtered Self-Distillation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {9055-9064}
}
Disentangling Subject-Irrelevant Elements in Personalized Text-to-Image Diffusion via Filtered Self-Distillation
Abstract
Recent research has advanced the customization of large-scale text-to-image models. These models bind a unique subject desired by a user to a specific token, then use that token to generate the subject in various contexts. However, models from previous studies also bind elements unrelated to the subject's identity, such as common backgrounds or poses in the reference images. This often leads to conflicts between the token and the context of the text prompt during inference, causing the model to fail to generate both the subject and the prompted context. In this work, we approach this issue from a data-scarcity perspective and propose to augment the number of reference images through a novel self-distillation framework. Our framework selects high-quality samples from images generated by a teacher model and uses them in student training. The framework can be applied to any model that suffers from these conflicts, and we demonstrate through comprehensive evaluations that it resolves the issue most effectively.