Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models

Mischa Dombrowski, Hadrien Reynaud, Matthew Baugh, Bernhard Kainz; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 988-998

Abstract


Curating datasets for object segmentation is a difficult task. With the advent of large-scale pre-trained generative models, conditional image generation has seen a significant boost in result quality and ease of use. In this paper, we present a novel method that enables the training of general foreground-background segmentation models from simple textual descriptions, without requiring any segmentation labels. We leverage pre-trained latent diffusion models to automatically generate weak segmentation masks for concepts and objects. These masks are then used to fine-tune the diffusion model on an inpainting task, which enables fine-grained removal of the object while simultaneously providing a synthetic foreground and background dataset. We demonstrate that this method outperforms previous approaches in both discriminative and generative performance and closes the gap to fully supervised training while requiring no pixel-wise object labels. We show results on the task of segmenting four different objects (humans, dogs, cars, birds) and on a use-case scenario in medical image analysis. The code is available at https://github.com/MischaD/fobadiffusion.
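
The abstract does not spell out how the weak masks are obtained from the pre-trained latent diffusion model. One common realisation of this idea, shown purely as an illustration rather than as the authors' exact procedure, is to average the UNet's cross-attention maps for the prompt token that names the object and threshold the result. The sketch below assumes the Hugging Face diffusers API and the runwayml/stable-diffusion-v1-5 checkpoint; both are assumptions on our part, not details from the paper.

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint, not from the paper
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

stored = []  # cross-attention probabilities collected during sampling

class StoringAttnProcessor:
    """Standard attention computation that also records cross-attention maps."""

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        is_cross = encoder_hidden_states is not None
        context = encoder_hidden_states if is_cross else hidden_states
        q = attn.head_to_batch_dim(attn.to_q(hidden_states))
        k = attn.head_to_batch_dim(attn.to_k(context))
        v = attn.head_to_batch_dim(attn.to_v(context))
        probs = attn.get_attention_scores(q, k, attention_mask)
        if is_cross and probs.shape[1] == 16 * 16:  # keep only the 16x16 maps
            stored.append(probs.detach().float().cpu())
        out = attn.batch_to_head_dim(torch.bmm(probs, v))
        return attn.to_out[1](attn.to_out[0](out))  # output projection + dropout

pipe.unet.set_attn_processor(StoringAttnProcessor())

prompt = "a photograph of a dog"
image = pipe(prompt, num_inference_steps=25).images[0]

# Find the position of the object token in the CLIP-tokenised prompt.
ids = pipe.tokenizer(prompt).input_ids
tok = ids.index(pipe.tokenizer.encode("dog", add_special_tokens=False)[0])

# Average all stored maps (note: under classifier-free guidance the batch
# also contains the unconditional branch) and apply a crude 0.5 threshold.
maps = torch.cat(stored)[:, :, tok]   # (layers * heads * steps, 256)
mask = maps.mean(0).reshape(16, 16)
mask = (mask - mask.min()) / (mask.max() - mask.min())
weak_mask = mask > 0.5                # low-resolution weak object mask

Upsampling this 16x16 map to image resolution and cleaning it up would yield the kind of weak mask the abstract refers to.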
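
For the second stage, the following is a hedged sketch of inpainting-based object removal: given an image and a weak mask, a diffusion inpainting model fills the masked region with plausible background, yielding a synthetic background image. The paper fine-tunes the diffusion model for this task; here the stock runwayml/stable-diffusion-inpainting checkpoint stands in for that fine-tuned model, which is an assumption made for illustration only. The file names are hypothetical.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # stand-in for the fine-tuned model
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Hypothetical inputs: any RGB image plus a binary mask covering the object.
image = Image.open("dog.png").convert("RGB").resize((512, 512))
mask = Image.open("weak_mask.png").convert("L").resize((512, 512))

# A background-only prompt steers the model towards removing the object
# rather than regenerating it inside the masked region.
background = inpaint(
    prompt="an empty scene, background only",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
background.save("background.png")

Pairing each generated image with its inpainted counterpart gives the synthetic foreground and background dataset the abstract describes.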

Related Material


[bibtex]
@InProceedings{Dombrowski_2023_ICCV,
    author    = {Dombrowski, Mischa and Reynaud, Hadrien and Baugh, Matthew and Kainz, Bernhard},
    title     = {Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {988-998}
}