MoA-Net: Self-Supervised Motion Segmentation

Pia Bideau, Rakesh R. Menon, Erik Learned-Miller; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Most recent approaches to motion segmentation use optical flow to segment an image into the static environment and independently moving objects. Neural-network-based approaches usually require large amounts of labeled training data to achieve state-of-the-art performance. In this work, we propose a new approach to train a motion segmentation network in a self-supervised manner. Inspired by visual ecology, the human visual system, and prior approaches to motion modeling, we break the problem of motion segmentation into two smaller subproblems: (1) modifying the flow field to remove the observer's rotation and (2) segmenting the rotation-compensated flow into the static environment and independently moving objects. Compensating for rotation leads to essential simplifications that allow us to describe an independently moving object with just a few criteria, which can be learned by our new motion segmentation network, the Motion Angle Network (MoA-Net). We compare our network with two other motion segmentation networks and show state-of-the-art performance on Sintel.
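The two subproblems above can be illustrated with a small synthetic sketch. It uses the standard instantaneous motion model in normalized image coordinates (focal length 1); the function names, sign conventions, and toy scene are illustrative assumptions, not the paper's implementation. The key simplification the abstract refers to is visible at the end: after removing rotation, the flow *angle* of the static environment depends only on pixel position and the camera's translation direction, not on scene depth.

```python
import numpy as np

# Hypothetical sketch of rotation compensation for optical flow,
# using the instantaneous motion model in normalized coordinates.
# Sign conventions may differ from the paper's.

def rotational_flow(xx, yy, omega):
    """Flow induced purely by camera rotation omega = (wx, wy, wz)."""
    wx, wy, wz = omega
    u = xx * yy * wx - (1.0 + xx ** 2) * wy + yy * wz
    v = (1.0 + yy ** 2) * wx - xx * yy * wy - xx * wz
    return u, v

def translational_flow(xx, yy, t, depth):
    """Flow induced by camera translation t = (tx, ty, tz); its
    magnitude depends on depth, but its direction does not."""
    tx, ty, tz = t
    u = (xx * tz - tx) / depth
    v = (yy * tz - ty) / depth
    return u, v

# Synthetic scene: a pixel grid, random depths, known camera motion.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(-0.5, 0.5, 32), np.linspace(-0.5, 0.5, 32))
depth = rng.uniform(2.0, 10.0, xx.shape)
omega = (0.02, -0.01, 0.03)   # camera angular velocity
t = (0.1, 0.0, 0.5)           # camera translation direction

ur, vr = rotational_flow(xx, yy, omega)
ut, vt = translational_flow(xx, yy, t, depth)
u_obs, v_obs = ur + ut, vr + vt          # observed optical flow

# Step (1): remove the observer's rotation from the flow field.
u_comp, v_comp = u_obs - ur, v_obs - vr

# Step (2): the remaining flow angles depend only on pixel position
# and translation direction -- not on depth -- so the static
# environment follows a predictable angle field that an independently
# moving object violates.
angles = np.arctan2(v_comp, u_comp)
expected = np.arctan2(yy * t[2] - t[1], xx * t[2] - t[0])
print(np.allclose(angles, expected))     # depth-independent angle field
```

In this sketch the rotation is known exactly, so compensation is a plain subtraction; in practice the observer's rotation must itself be estimated from the flow, which is part of what makes the full problem hard.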

Related Material


[bibtex]
@InProceedings{Bideau_2018_ECCV_Workshops,
author = {Bideau, Pia and Menon, Rakesh R. and Learned-Miller, Erik},
title = {MoA-Net: Self-Supervised Motion Segmentation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}