Unseen Visual Anomaly Generation

Han Sun, Yunkang Cao, Hao Dong, Olga Fink; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 25508-25517

Abstract


Visual anomaly detection (AD) presents significant challenges due to the scarcity of anomalous data samples. While numerous works have been proposed to synthesize anomalous samples, these synthetic anomalies often lack authenticity or require extensive training data, limiting their applicability in real-world scenarios. In this work, we propose Anomaly Anything (AnomalyAny), a novel framework that leverages Stable Diffusion (SD)'s image generation capabilities to generate diverse and realistic unseen anomalies. By conditioning on a single normal sample during test time, AnomalyAny is able to generate unseen anomalies for arbitrary object types with text descriptions. Within AnomalyAny, we propose attention-guided anomaly optimization to direct SD's attention toward generating hard anomaly concepts. Additionally, we introduce prompt-guided anomaly refinement, incorporating detailed descriptions to further improve the generation quality. Extensive experiments on MVTec AD and VisA datasets demonstrate AnomalyAny's ability to generate high-quality unseen anomalies and its effectiveness in enhancing downstream AD performance. Our demo and code are available at https://hansunhayden.github.io/CUT.github.io.
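
The sketch below is not the authors' released code; it only illustrates, under toy assumptions, the general pattern suggested by "attention-guided anomaly optimization": steering a latent at test time so that cross-attention mass concentrates on an anomaly token. The sizes, the `anomaly_token_idx`, and the small linear cross-attention module standing in for Stable Diffusion's UNet are all hypothetical placeholders.

```python
# Illustrative sketch only (not the AnomalyAny implementation): test-time
# optimization of a latent so that cross-attention focuses on an "anomaly"
# token, with a toy cross-attention block in place of SD's UNet layers.
import torch

torch.manual_seed(0)

dim, n_tokens, n_pixels = 64, 8, 16 * 16   # toy sizes (assumption)
anomaly_token_idx = 5                      # hypothetical index of the anomaly word

# Stand-ins for frozen text-encoder outputs and UNet attention projections.
text_emb = torch.randn(n_tokens, dim)
to_q = torch.nn.Linear(dim, dim, bias=False)
to_k = torch.nn.Linear(dim, dim, bias=False)

def cross_attention_map(latent_feats: torch.Tensor) -> torch.Tensor:
    """Return a (n_pixels, n_tokens) cross-attention map."""
    q = to_q(latent_feats)                 # (n_pixels, dim)
    k = to_k(text_emb)                     # (n_tokens, dim)
    scores = q @ k.T / dim ** 0.5
    return scores.softmax(dim=-1)

# Latent initialized from a stand-in for the encoded normal sample.
latent = torch.randn(n_pixels, dim, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=1e-2)

for step in range(50):
    attn = cross_attention_map(latent)
    # Guidance loss: increase attention mass on the anomaly token so the
    # generator commits to the hard anomaly concept.
    loss = -attn[:, anomaly_token_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

final_attn = cross_attention_map(latent)[:, anomaly_token_idx].mean().item()
print(f"final anomaly-token attention: {final_attn:.3f}")
```

In the actual method this kind of guidance would operate inside the diffusion sampling loop and be combined with the prompt-guided refinement described above; the sketch only conveys the attention-steering objective.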

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Sun_2025_CVPR,
    author    = {Sun, Han and Cao, Yunkang and Dong, Hao and Fink, Olga},
    title     = {Unseen Visual Anomaly Generation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {25508-25517}
}