Learning Optical Flow From a Few Matches
Abstract
State-of-the-art neural network models for optical flow estimation require a dense correlation volume at high resolutions to represent per-pixel displacement. Although the dense correlation volume is informative for accurate estimation, its heavy computation and memory usage hinder the efficient training and deployment of such models. In this paper, we show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements. Based on this observation, we propose an alternative displacement representation, named the Sparse Correlation Volume, which is constructed directly by computing the k closest matches in one feature map for each feature vector in the other feature map and storing them in a sparse data structure. Experiments show that our method significantly reduces computational cost and memory use and produces fine-structure motion, while maintaining high accuracy compared to previous approaches with dense correlation volumes.
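To make the construction described above concrete, below is a minimal PyTorch-style sketch, not the authors' implementation, of building a sparse correlation volume: for each feature vector in one feature map, only the values and flat target indices of its k highest correlations with the other feature map are kept, so the dense H*W x H*W volume is never stored. The function name, the choice k=8, the chunk size, the feature shapes in the usage example, and the 1/sqrt(C) scaling are all illustrative assumptions.

import torch

def sparse_correlation_volume(fmap1, fmap2, k=8, chunk=1024):
    """fmap1, fmap2: [C, H, W] feature maps.
    Returns top-k correlation values [H*W, k] and flat target indices [H*W, k]."""
    C, H, W = fmap1.shape
    f1 = fmap1.reshape(C, H * W).t()        # [N, C], one row per source pixel
    f2 = fmap2.reshape(C, H * W)            # [C, N]
    vals, idxs = [], []
    # Process source pixels in chunks so the full dense volume is never materialized.
    for start in range(0, H * W, chunk):
        corr = f1[start:start + chunk] @ f2  # [chunk, N] correlations
        corr = corr / C ** 0.5               # scale by 1/sqrt(C) (assumption)
        v, i = torch.topk(corr, k, dim=1)    # keep only the k best matches per pixel
        vals.append(v)
        idxs.append(i)
    return torch.cat(vals), torch.cat(idxs)

# Usage with random features at a coarse resolution (shapes chosen for illustration):
fmap1 = torch.randn(256, 56, 128)
fmap2 = torch.randn(256, 56, 128)
values, indices = sparse_correlation_volume(fmap1, fmap2, k=8)
print(values.shape, indices.shape)  # torch.Size([7168, 8]) torch.Size([7168, 8])

The flat indices can be converted back to 2D target coordinates with divmod(index, W) when the sparse matches are looked up during flow refinement.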
Related Material

[pdf] [arXiv] [bibtex]

@InProceedings{Jiang_2021_CVPR,
  author    = {Jiang, Shihao and Lu, Yao and Li, Hongdong and Hartley, Richard},
  title     = {Learning Optical Flow From a Few Matches},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {16592-16600}
}