ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object

Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 21752-21762

Abstract


We establish rigorous benchmarks for visual perception robustness. Synthetic images such as ImageNet-C, ImageNet-9, and Stylized ImageNet provide specific types of evaluation over synthetic corruptions, backgrounds, and textures, yet those robustness benchmarks are restricted to specified variations and have low synthetic quality. In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness. Leveraging diffusion models, we are able to generate images with more diversified backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D. Experimental results show that ImageNet-D causes a significant accuracy drop across a range of vision models, from the standard ResNet visual classifier to the latest foundation models like CLIP and MiniGPT-4, reducing their accuracy by up to 60%. Our work suggests that diffusion models can be an effective source of images for testing vision models. The code and dataset are available at https://github.com/chenshuang-zhang/imagenet_d.
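To make the idea concrete, below is a minimal sketch (not the authors' pipeline; see the repository above for that) of how one might generate a diffusion image of an object with an atypical material and check a CLIP zero-shot classifier on it. The Hugging Face checkpoints, the prompt template, and the small label set are all illustrative assumptions.

```python
# Sketch: generate a "hard" test image with a diffusion model, then probe a
# CLIP zero-shot classifier. Checkpoints and prompts are assumptions, not the
# paper's exact setup.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Generate an ImageNet-style object with an unusual material attribute.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
image = pipe("a photo of a backpack made of transparent glass").images[0]

# Zero-shot classification with CLIP over a small, illustrative label set.
labels = ["backpack", "handbag", "bottle", "vase"]
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(
    text=[f"a photo of a {c}" for c in labels],
    images=image,
    return_tensors="pt",
    padding=True,
).to(device)
with torch.no_grad():
    probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]

# In a benchmark-building loop, generated images that the classifier gets
# wrong would be the interesting "hard" candidates.
print("predicted:", labels[probs.argmax().item()])
```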

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Chenshuang and Pan, Fei and Kim, Junmo and Kweon, In So and Mao, Chengzhi},
    title     = {ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {21752-21762}
}