Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition

Yiqing Liang, Eliot Laidlaw, Alexander Meyerowitz, Srinath Sridhar, James Tompkin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 21797-21806

Abstract


From video, we reconstruct a neural volume that captures time-varying color, density, scene flow, semantics, and attention information. The semantics and attention let us identify salient foreground objects separately from the background across spacetime. To mitigate the low resolution of the semantic and attention features, we compute pyramids that trade off detail against whole-image context. After optimization, we perform a saliency-aware clustering to decompose the scene. To evaluate on real-world scenes, we annotate object masks in the NVIDIA Dynamic Scene and DyCheck datasets. We demonstrate that this method can decompose dynamic scenes in an unsupervised way, with performance competitive with a supervised method, and that it improves foreground/background segmentation over recent static/dynamic split methods. Project webpage: https://visual.cs.brown.edu/saff
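The sketch below illustrates, under stated assumptions, the saliency-aware clustering step mentioned in the abstract: per-point semantic features rendered from the optimized volume are grouped with k-means, and each cluster is labeled salient foreground or background by its mean attention value. This is not the authors' implementation; the feature dimension, cluster count, threshold, and function name are illustrative choices.

```python
# Minimal sketch of saliency-aware clustering for scene decomposition.
# Assumes semantic features and attention values have already been rendered
# from the optimized neural volume; all parameters here are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def decompose_by_saliency(semantic_feats, attention, n_clusters=8, saliency_thresh=0.5):
    """semantic_feats: (N, D) per-point semantic features.
    attention: (N,) per-point attention (saliency) values in [0, 1].
    Returns per-point cluster ids and a boolean foreground mask."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(semantic_feats)
    foreground = np.zeros(len(labels), dtype=bool)
    for c in range(n_clusters):
        members = labels == c
        # A cluster counts as salient foreground if its mean attention
        # exceeds the (assumed) threshold; all other clusters are background.
        if attention[members].mean() > saliency_thresh:
            foreground[members] = True
    return labels, foreground

# Toy usage with random stand-in features.
feats = np.random.rand(1000, 64)
attn = np.random.rand(1000)
cluster_ids, fg_mask = decompose_by_saliency(feats, attn)
```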

Related Material


@InProceedings{Liang_2023_ICCV,
    author    = {Liang, Yiqing and Laidlaw, Eliot and Meyerowitz, Alexander and Sridhar, Srinath and Tompkin, James},
    title     = {Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21797-21806}
}