Learning Pixel Trajectories With Multiscale Contrastive Random Walks

Zhangxing Bian, Allan Jabri, Alexei A. Efros, Andrew Owens; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 6508-6519

Abstract


A range of video modeling tasks, from optical flow to multiple object tracking, share the same fundamental challenge: establishing space-time correspondence. Yet the approaches that dominate each task differ. We take a step toward bridging this gap by extending the recent contrastive random walk formulation to much denser, pixel-level space-time graphs. The main contribution is introducing hierarchy into the search problem by computing the transition matrix in a coarse-to-fine manner, forming a multiscale contrastive random walk. This establishes a unified technique for self-supervised learning of optical flow, keypoint tracking, and video object segmentation. Experiments demonstrate that, for each of these tasks, our unified model achieves performance competitive with strong self-supervised approaches specific to that task.
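To make the core object concrete, here is a minimal sketch of the row-stochastic transition matrix at the heart of a contrastive random walk: pairwise affinities between node embeddings in adjacent frames, softmax-normalized per row. This is only an illustration of the single-scale building block; the paper's coarse-to-fine hierarchy and training loss are omitted, and the function names and temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transition_matrix(feats_a, feats_b, temperature=0.07):
    """Row-stochastic transition matrix between two frames' node features.

    feats_a, feats_b: (N, D) L2-normalized feature arrays, one row per node.
    Returns an (N, N) matrix whose rows are softmax-normalized affinities,
    i.e. each row is a distribution over nodes in the next frame.
    """
    affinity = feats_a @ feats_b.T                 # cosine similarities (unit-norm rows)
    logits = affinity / temperature
    logits -= logits.max(axis=1, keepdims=True)    # subtract row max for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

# Toy usage: random unit-norm features standing in for two frames' pixels.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(16, 8)); f1 /= np.linalg.norm(f1, axis=1, keepdims=True)
f2 = rng.normal(size=(16, 8)); f2 /= np.linalg.norm(f2, axis=1, keepdims=True)

A = transition_matrix(f1, f2)   # (16, 16); rows sum to 1
```

At pixel resolution this matrix is far too large to form densely, which is what motivates the paper's hierarchical, coarse-to-fine computation.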

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Bian_2022_CVPR,
    author    = {Bian, Zhangxing and Jabri, Allan and Efros, Alexei A. and Owens, Andrew},
    title     = {Learning Pixel Trajectories With Multiscale Contrastive Random Walks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {6508-6519}
}