Learning Unsupervised Metaformer for Anomaly Detection

Jhih-Ciang Wu, Ding-Jie Chen, Chiou-Shann Fuh, Tyng-Luh Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4369-4378

Abstract


Anomaly detection (AD) addresses the classification or localization of image anomalies. This paper tackles two pivotal issues of reconstruction-based approaches to AD in images, namely, model adaptation and the reconstruction gap. The former generalizes an AD model to a broad range of object categories, while the latter provides useful clues for localizing abnormal regions. At the core of our method is an unsupervised universal model, termed Metaformer, which leverages meta-learned model parameters to achieve high model adaptation capability and instance-aware attention to emphasize focal regions, i.e., to explore the reconstruction gap at those regions of interest. We justify the effectiveness of our method with state-of-the-art results on the MVTec AD dataset of industrial images and highlight the adaptation flexibility of the universal Metaformer in multi-class and few-shot scenarios.
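For context, reconstruction-based AD scores a pixel as anomalous when the model's reconstruction deviates from the input, i.e., the reconstruction gap. The following is a minimal illustrative sketch of that idea only, not the paper's Metaformer; the reconstruction array here stands in for a model trained on normal data, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def anomaly_map(image: np.ndarray, reconstruction: np.ndarray) -> np.ndarray:
    """Per-pixel reconstruction gap; large values indicate candidate anomalies."""
    return (image.astype(np.float64) - reconstruction.astype(np.float64)) ** 2

def localize(image: np.ndarray, reconstruction: np.ndarray,
             threshold: float = 0.1) -> np.ndarray:
    """Binary mask of abnormal regions via a thresholded reconstruction gap."""
    return anomaly_map(image, reconstruction) > threshold

# Toy example: an anomalous patch that a model trained on normal data
# (here faked as an all-zero reconstruction) fails to reproduce.
img = np.zeros((8, 8))
img[2:4, 2:4] = 1.0
recon = np.zeros((8, 8))
mask = localize(img, recon)
print(int(mask.sum()))  # 4 pixels flagged as abnormal
```

In practice the reconstruction comes from a learned generative model, and the gap is aggregated (e.g., smoothed or pooled) before thresholding; the sketch only shows the scoring principle.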

Related Material


[bibtex]
@InProceedings{Wu_2021_ICCV,
  author    = {Wu, Jhih-Ciang and Chen, Ding-Jie and Fuh, Chiou-Shann and Liu, Tyng-Luh},
  title     = {Learning Unsupervised Metaformer for Anomaly Detection},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {4369-4378}
}