Self-supervised Sparse to Dense Motion Segmentation

Amirhossein Kardoost, Kalun Ho, Peter Ochs, Margret Keuper; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Observable motion in videos can give rise to the definition of objects moving with respect to the scene. The task of segmenting such moving objects is referred to as motion segmentation and is usually tackled either by aggregating motion information in long, sparse point trajectories, or by directly producing per-frame dense segmentations relying on large amounts of training data. In this paper, we propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames. While previous approaches towards motion segmentation build upon pre-training on large surrogate datasets and use dense motion information as an essential cue for the pixel-wise segmentation, our model does not require pre-training and operates at test time on single frames. It can be trained in a sequence-specific way to produce high-quality dense segmentations from sparse and noisy input. We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS2016.
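
To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how such sparse-to-dense training could look in PyTorch: a small segmentation network is trained per sequence on single RGB frames, with a cross-entropy loss evaluated only at the pixels that carry sparse trajectory labels, and produces dense masks at test time. All module and variable names (TinySegNet, sparse_ce_loss, the toy data) are illustrative assumptions.

# Minimal sketch, assuming sparse trajectory labels are rasterized into a
# per-frame label map where unlabeled pixels are marked with -1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Hypothetical encoder-decoder producing per-pixel class logits."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

def sparse_ce_loss(logits, sparse_labels):
    # sparse_labels: (B, H, W) long tensor; unlabeled pixels are -1 and are
    # ignored, so only the sparse trajectory points contribute to the loss.
    return F.cross_entropy(logits, sparse_labels, ignore_index=-1)

# Sequence-specific training loop on stand-in data (illustrative only).
model = TinySegNet(num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 128, 128)                       # fake video frames
labels = torch.full((8, 128, 128), -1, dtype=torch.long)  # mostly unlabeled
labels[:, ::16, ::16] = torch.randint(0, 2, (8, 8, 8))    # sparse "trajectory" labels

for step in range(100):
    logits = model(frames)
    loss = sparse_ce_loss(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time the network is applied to single frames and yields dense masks.
dense_masks = model(frames).argmax(dim=1)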

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Kardoost_2020_ACCV,
    author    = {Kardoost, Amirhossein and Ho, Kalun and Ochs, Peter and Keuper, Margret},
    title     = {Self-supervised Sparse to Dense Motion Segmentation},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}