TrustMAE: A Noise-Resilient Defect Classification Framework Using Memory-Augmented Auto-Encoders With Trust Regions

Daniel Stanley Tan, Yi-Chun Chen, Trista Pei-Chun Chen, Wei-Chao Chen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 276-285

Abstract


In this paper, we propose a framework called TrustMAE to address the problem of product defect classification. Instead of relying on defective images that are difficult to collect and laborious to label, our framework can accept datasets with unlabeled images. Moreover, unlike most anomaly detection methods, our approach is robust against noise, i.e., defective images, in the training dataset. Our framework uses a memory-augmented auto-encoder with a sparse memory addressing scheme to avoid over-generalizing the auto-encoder, and a novel trust-region memory updating scheme to keep the noise away from the memory slots. The result is a framework that can reconstruct defect-free images and identify the defective regions using a perceptual distance network. When compared against various state-of-the-art baselines, our approach performs competitively on the noise-free MVTec dataset. More importantly, it remains effective at noise levels up to 40% while significantly outperforming the other baselines.
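To make the sparse memory addressing idea concrete, here is a minimal NumPy sketch of a MemAE-style read step: a latent code attends over a bank of memory slots, and attention weights below a shrinkage threshold are zeroed so the decoder can only combine a few memorized defect-free patterns. All names, the cosine-similarity addressing, and the threshold value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sparse_memory_read(z, memory, shrink_thresh=0.1):
    """Sparse read from a memory bank (illustrative, MemAE-style).

    Weights below `shrink_thresh` are hard-shrunk to zero, forcing the
    reconstruction to use only a few memory slots, which discourages the
    auto-encoder from reconstructing defective patterns.
    """
    # Cosine similarity between the latent code and each memory slot.
    sims = memory @ z / (np.linalg.norm(memory, axis=1) * np.linalg.norm(z) + 1e-8)
    w = np.exp(sims) / np.exp(sims).sum()     # softmax addressing weights
    w = np.where(w > shrink_thresh, w, 0.0)   # hard shrinkage -> sparsity
    w = w / (w.sum() + 1e-8)                  # renormalize surviving weights
    return w @ memory, w                      # reconstructed latent, weights

rng = np.random.default_rng(0)
memory = rng.normal(size=(10, 8))   # 10 memory slots, 8-dim latent space
z = rng.normal(size=8)
z_hat, w = sparse_memory_read(z, memory)
```

At inference time, a defective input would be reconstructed toward the nearest defect-free memory entries, and the perceptual distance between input and reconstruction would then localize the defect.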

Related Material


@InProceedings{Tan_2021_WACV,
  author    = {Tan, Daniel Stanley and Chen, Yi-Chun and Chen, Trista Pei-Chun and Chen, Wei-Chao},
  title     = {TrustMAE: A Noise-Resilient Defect Classification Framework Using Memory-Augmented Auto-Encoders With Trust Regions},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {276-285}
}