MVAD: A Multiple Visual Artifact Detector for Video Streaming
Abstract
Visual artifacts are often introduced into streamed video content due to the prevailing conditions during content production and delivery. Since these can degrade the quality of the user's experience, it is important to detect them automatically and accurately in order to enable effective quality measurement and enhancement. Existing detection methods often focus on a single type of artifact and/or determine the presence of an artifact by thresholding objective quality indices. Such approaches have been reported to offer inconsistent prediction performance and are also impractical for real-world applications, where multiple artifacts co-exist and interact. In this paper, we propose a Multiple Visual Artifact Detector (MVAD) for video streaming which, for the first time, is able to detect multiple artifacts using a single framework that is not reliant on video quality assessment models. Our approach employs a new Artifact-aware Dynamic Feature Extractor (ADFE) to obtain artifact-relevant spatial features within each frame for multiple artifact types. The extracted features are further processed by a Recurrent Memory Vision Transformer (RMViT) module, which captures both short-term and long-term temporal information within the input video. The proposed network architecture is optimized in an end-to-end manner on a new, large and diverse training database that is generated by simulating the video streaming pipeline and by applying Adversarial Data Augmentation. The model has been evaluated on two video artifact databases, Maxwell and BVI-Artifact, and achieves consistent and improved prediction results for ten target visual artifacts when compared with seven existing single- and multiple-artifact detectors. The source code and training database will be available at https://chenfeng-bristol.github.io/MVAD/.
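To make the pipeline described in the abstract more concrete (per-frame spatial feature extraction, temporal aggregation over the clip, and a multi-label artifact prediction head), the following PyTorch sketch illustrates the general structure. It is a minimal, hypothetical illustration only: the simple convolutional extractor stands in for the ADFE, a GRU stands in for the RMViT temporal module, and all module names, layer sizes, and hyperparameters are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a multi-artifact video detector in the spirit of the
# pipeline above. Not the authors' code: the ADFE and RMViT are replaced by
# simple placeholder modules, and all sizes are illustrative assumptions.
import torch
import torch.nn as nn


class SpatialFeatureExtractor(nn.Module):
    """Placeholder for the Artifact-aware Dynamic Feature Extractor (ADFE)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = frames.flatten(0, 1)                   # (B*T, 3, H, W)
        return self.backbone(x).view(b, t, -1)     # (B, T, feat_dim)


class MultiArtifactDetector(nn.Module):
    """Per-frame features -> temporal aggregation -> one logit per artifact."""
    def __init__(self, num_artifacts: int = 10, feat_dim: int = 256):
        super().__init__()
        self.spatial = SpatialFeatureExtractor(feat_dim)
        # A GRU stands in here for the paper's RMViT temporal module.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_artifacts)

    def forward(self, frames):
        feats = self.spatial(frames)               # (B, T, feat_dim)
        _, h = self.temporal(feats)                # h: (1, B, feat_dim)
        return self.head(h[-1])                    # logits: (B, num_artifacts)


if __name__ == "__main__":
    model = MultiArtifactDetector()
    clip = torch.randn(2, 8, 3, 128, 128)          # 2 clips of 8 frames each
    logits = model(clip)
    probs = torch.sigmoid(logits)                  # multi-label artifact probabilities
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(2, 10))  # multi-label training loss
    print(probs.shape, loss.item())
```

The multi-label sigmoid head reflects the setting described in the abstract, where several artifact types can co-exist in the same clip and each is predicted independently.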
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Feng_2025_WACV,
    author    = {Feng, Chen and Danier, Duolikun and Zhang, Fan and Mackin, Alex and Collins, Andrew and Bull, David},
    title     = {MVAD: A Multiple Visual Artifact Detector for Video Streaming},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {3148-3158}
}