@InProceedings{Zhao_2024_WACV,
  author    = {Zhao, Ganning and Magoulianitis, Vasileios and You, Suya and Kuo, C.-C. Jay},
  title     = {A Lightweight Generalizable Evaluation and Enhancement Framework for Generative Models and Generated Samples},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {January},
  year      = {2024},
  pages     = {450-459}
}
A Lightweight Generalizable Evaluation and Enhancement Framework for Generative Models and Generated Samples
Abstract
While extensive research has been conducted on evaluating generative models, little work has addressed the quality assessment and enhancement of individual generated samples. We propose a lightweight, generalizable framework designed to evaluate and enhance both generative models and generated samples. Our framework trains a classifier-based, dataset-specific model, enabling its application to unseen generative models and extending its compatibility to both deep learning and efficient machine learning-based methods. We propose three novel evaluation metrics aimed at capturing the distribution correlation, quality, and diversity of generated samples. Together, these metrics offer a more thorough performance evaluation of generative models than the Fréchet Inception Distance (FID). Our approach assigns an individual quality score to each generated sample for sample-level evaluation. This enables better sample mining and thereby improves the performance of generative models by filtering out lower-quality generations. Extensive experiments across various datasets and generative models demonstrate the effectiveness and efficiency of the proposed method.
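The filtering step described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the scoring model, threshold strategy, and function names (`filter_samples`, `keep_ratio`) are assumptions; the abstract only states that per-sample quality scores are used to discard lower-quality generations.

```python
def filter_samples(samples, scores, keep_ratio=0.8):
    """Sample mining sketch: keep the top `keep_ratio` fraction of
    generated samples, ranked by their per-sample quality scores.
    (Hypothetical; the paper's actual scoring model is not shown here.)"""
    ranked = sorted(zip(scores, range(len(samples))), reverse=True)
    n_keep = max(1, int(len(samples) * keep_ratio))
    # Restore original ordering among the kept samples.
    keep_idx = sorted(i for _, i in ranked[:n_keep])
    return [samples[i] for i in keep_idx]

# Toy example with made-up quality scores:
samples = ["s0", "s1", "s2", "s3", "s4"]
scores = [0.9, 0.2, 0.7, 0.4, 0.8]
print(filter_samples(samples, scores, keep_ratio=0.6))  # → ['s0', 's2', 's4']
```

In practice the scores would come from the framework's dataset-specific classifier; the filtered set is then what a downstream user or evaluation consumes.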