UnSAMFlow: Unsupervised Optical Flow Guided by Segment Anything Model

Shuai Yuan, Lei Luo, Zhuo Hui, Can Pu, Xiaoyu Xiang, Rakesh Ranjan, Denis Demandolx; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 19027-19037

Abstract


Traditional unsupervised optical flow methods are vulnerable to occlusions and motion boundaries due to their lack of object-level information. We therefore propose UnSAMFlow, an unsupervised flow network that also leverages object information from the latest foundation model, the Segment Anything Model (SAM). We first include a self-supervised semantic augmentation module tailored to SAM masks. We also analyze the poor gradient landscapes of traditional smoothness losses and instead propose a new smoothness definition based on homography. A simple yet effective mask feature module is further added to aggregate features at the object level. With all these adaptations, our method produces clear optical flow estimates with sharp boundaries around objects, outperforming state-of-the-art methods on both the KITTI and Sintel datasets. Our method also generalizes well across domains and runs very efficiently.
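The homography-based smoothness idea can be illustrated with a small sketch: within a segment (e.g. a SAM mask), fit a single homography to the estimated flow and penalize how far the flow deviates from that fit, so flow inside one object is encouraged to follow a single planar motion. The function names `fit_homography` and `homography_smoothness` below are hypothetical, not the paper's implementation; this is a minimal NumPy sketch under the assumption that per-mask flow is approximately homographic.

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography mapping src -> dst points via DLT (least squares).

    src, dst: (N, 2) arrays of pixel coordinates, N >= 4.
    """
    n = len(src)
    A = np.zeros((2 * n, 9))
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        A[2 * i]     = [-x, -y, -1,  0,  0,  0, u * x, u * y, u]
        A[2 * i + 1] = [ 0,  0,  0, -x, -y, -1, v * x, v * y, v]
    # The null-space direction (smallest singular vector) gives H up to scale.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def homography_smoothness(points, flow, mask):
    """Mean L1 residual between observed flow and its best-fit homography flow.

    points: (N, 2) pixel coords; flow: (N, 2) flow vectors; mask: (N,) bool
    selecting the pixels of one segment. A small residual means the flow
    inside the mask is consistent with a single planar (homographic) motion.
    """
    src = points[mask]
    dst = src + flow[mask]
    H = fit_homography(src, dst)
    proj = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    pred = proj[:, :2] / proj[:, 2:3]  # de-homogenize
    return np.abs(pred - dst).mean()
```

Unlike a first- or second-order smoothness penalty, which is minimized only by constant or affine flow, this residual is zero for any homographic flow, so it tolerates perspective motion of rigid planar surfaces while still penalizing irregular flow within an object.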

Related Material


[bibtex]
@InProceedings{Yuan_2024_CVPR,
    author    = {Yuan, Shuai and Luo, Lei and Hui, Zhuo and Pu, Can and Xiang, Xiaoyu and Ranjan, Rakesh and Demandolx, Denis},
    title     = {UnSAMFlow: Unsupervised Optical Flow Guided by Segment Anything Model},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {19027-19037}
}