SAVeD: Learning to Denoise Low-SNR Video for Improved Downstream Performance

Suzanne Stathatos, Michael Hobley, Pietro Perona, Markus Marks; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026, pp. 6851-6861

Abstract

Low signal-to-noise ratio (SNR) videos -- such as those from underwater sonar, ultrasound, and microscopy -- pose significant challenges for computer vision models, particularly when paired clean imagery for denoising is unavailable. We present Spatiotemporal Augmentations and denoising in Video for Downstream Tasks (SAVeD), a novel self-supervised method that denoises low-SNR sensor videos using only the raw noisy data. By leveraging the distinction between foreground and background motion and exaggerating objects with stronger motion signals, SAVeD enhances foreground object visibility and suppresses background and camera noise without requiring clean video. A set of architectural optimizations gives SAVeD higher throughput and faster training and inference than existing deep-learning methods. We also introduce a new denoising metric, FBD, which measures foreground-background divergence on detection datasets without requiring clean imagery. Our approach achieves state-of-the-art results on classification, detection, tracking, and counting tasks, and does so with fewer training resources than existing deep-learning-based denoising methods.
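The abstract describes FBD only as a foreground-background divergence computed from detection annotations without clean imagery. The paper's exact definition is not given here; the sketch below is a hypothetical FBD-like score (my assumption, not the authors' formula) that fits Gaussians to pixel intensities inside and outside detection boxes and reports their symmetrized KL divergence -- a denoised frame with better-separated foreground should score higher.

```python
import numpy as np

def fbd_like_score(frame, boxes):
    """Hypothetical FBD-like score: symmetrized KL divergence between
    Gaussian fits of foreground (inside detection boxes) and background
    pixel intensities. Illustrative only; the paper's FBD may differ.

    frame: 2D array of pixel intensities.
    boxes: iterable of (x0, y0, x1, y1) detection boxes.
    """
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = True  # mark foreground pixels
    fg = frame[mask].astype(np.float64)
    bg = frame[~mask].astype(np.float64)
    if fg.size == 0 or bg.size == 0:
        return 0.0
    mu_f, mu_b = fg.mean(), bg.mean()
    var_f, var_b = fg.var() + 1e-8, bg.var() + 1e-8  # avoid divide-by-zero
    # KL(N(mu_f, var_f) || N(mu_b, var_b)) and its reverse, summed
    kl_fb = 0.5 * (var_f / var_b + (mu_b - mu_f) ** 2 / var_b
                   - 1.0 + np.log(var_b / var_f))
    kl_bf = 0.5 * (var_b / var_f + (mu_f - mu_b) ** 2 / var_f
                   - 1.0 + np.log(var_f / var_b))
    return kl_fb + kl_bf
```

Under this proxy, a frame where boxed objects clearly stand out from the background yields a much larger score than a frame of homogeneous noise, matching the intended use of FBD as a clean-image-free denoising indicator.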

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Stathatos_2026_WACV,
    author    = {Stathatos, Suzanne and Hobley, Michael and Perona, Pietro and Marks, Markus},
    title     = {SAVeD: Learning to Denoise Low-SNR Video for Improved Downstream Performance},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2026},
    pages     = {6851-6861}
}