@InProceedings{Carreira_2024_CVPR,
  author    = {Carreira, Jo\~ao and King, Michael and Patraucean, Viorica and Gokay, Dilara and Ionescu, Catalin and Yang, Yi and Zoran, Daniel and Heyward, Joseph and Doersch, Carl and Aytar, Yusuf and Damen, Dima and Zisserman, Andrew},
  title     = {Learning from One Continuous Video Stream},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {28751-28761}
}
Learning from One Continuous Video Stream
Abstract
We introduce a framework for online learning from a single continuous video stream: the way people and animals learn, without mini-batches, data augmentation, or shuffling. This poses great challenges given the high correlation between consecutive video frames, and there is very little prior work on it. Our framework allows us to take a first deep dive into the topic. It includes a collection of streams and tasks composed from two existing video datasets, plus a methodology for performance evaluation that considers both adaptation and generalization. We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation, as well as between arbitrary tasks, without ever requiring changes to models and always using the same pixel loss. Equipped with this framework, we obtained large single-stream learning gains from pre-training with a novel family of future prediction tasks, found that momentum hurts, and found that the pace of weight updates matters. The combination of these insights leads to matching the performance of IID learning with batch size 1, when using the same architecture and without costly replay buffers.
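The training regime the abstract describes can be sketched as a loop: one frame arrives at a time, the model predicts the next frame under a pixel loss, and the weights receive exactly one plain-SGD update (batch size 1, no momentum, no replay buffer) before the stream moves on. The following is a minimal illustration under assumed names: the synthetic drifting stream, the linear model, and the functions `stream` and `online_future_prediction` are all stand-ins for illustration, not the paper's actual architecture or data.

```python
import numpy as np

def stream(num_frames, size=16, seed=0):
    # Synthetic highly correlated "video": a slowly drifting pixel vector,
    # standing in for consecutive frames of a real stream.
    rng = np.random.default_rng(seed)
    frame = rng.standard_normal(size)
    for _ in range(num_frames):
        frame = frame + 0.05 * rng.standard_normal(size)
        yield frame

def online_future_prediction(num_frames=500, size=16, lr=0.01):
    """Online learning from one stream: predict the next frame from the
    current one with a linear model and a pixel (MSE) loss, using plain
    SGD with batch size 1 -- no momentum, no shuffling, no replay."""
    W = np.zeros((size, size))
    losses = []
    prev = None
    for frame in stream(num_frames, size):
        if prev is not None:
            pred = W @ prev                # pixel-to-pixel prediction
            err = pred - frame             # residual under the pixel loss
            losses.append(float(np.mean(err ** 2)))
            W -= lr * np.outer(err, prev)  # one weight update per frame
        prev = frame
    return losses

losses = online_future_prediction()
print(f"first-100 mean loss {np.mean(losses[:100]):.3f}, "
      f"last-100 mean loss {np.mean(losses[-100:]):.3f}")
```

Even on this toy stream, the pixel loss drops as the model adapts online, despite the heavy frame-to-frame correlation; the paper's contribution is showing how pre-training and the pace of weight updates make such adaptation work at scale.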