Gradient Forward-Propagation for Large-Scale Temporal Video Modelling

Mateusz Malinowski, Dimitrios Vytiniotis, Grzegorz Swirszcz, Viorica Patraucean, Joao Carreira; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 9249-9259

Abstract


How can neural networks be trained efficiently on large-volume temporal data? To compute the gradients required to update parameters, backpropagation blocks computation until the forward and backward passes are completed. For temporal signals, this introduces high latency and hinders real-time learning. It also couples consecutive layers, which limits model parallelism and increases memory consumption. In this paper, we build upon Sideways, which avoids blocking by propagating approximate gradients forward in time, and propose mechanisms for temporal integration of information based on different variants of skip connections. We also show how to decouple computation and delegate individual neural modules to different devices, allowing distributed and parallel training. The proposed Skip-sideways achieves low-latency training and model parallelism and, importantly, is capable of extracting temporal features, leading to more stable training and improved performance on real-world video datasets such as HMDB51, UCF101, and the large-scale Kinetics-600. Finally, we show that models trained with Skip-sideways generate better future frames than Sideways models, and hence can better utilize motion cues.
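
To illustrate the pipelining idea described in the abstract, the toy sketch below (plain NumPy; names such as ToyLayer are hypothetical, and this is not the authors' implementation) trains a small stack of layers in a "sideways" fashion: at each wall-clock tick, every layer consumes the activation its predecessor emitted one tick earlier, and loss gradients likewise travel one layer per tick, so no layer blocks on a full forward/backward pass. A crude input skip connection stands in for the paper's skip-connection-based temporal integration.

# Toy sketch of pipelined "sideways" training (illustrative only; not the
# authors' implementation). Each layer works on activations and gradients
# produced one tick earlier, so layers never block on a full forward/backward
# pass and could in principle run on separate devices.
import numpy as np

rng = np.random.default_rng(0)

class ToyLayer:
    def __init__(self, d_in, d_out):
        self.w = rng.normal(scale=0.1, size=(d_in, d_out))
        self.last_in = None                      # input cached at the last forward

    def forward(self, x):
        self.last_in = x
        return np.tanh(x @ self.w)

    def backward_update(self, grad_out, lr=1e-2):
        # grad_out was computed from slightly stale activations; accepting this
        # approximation is what removes the locking between layers.
        pre = self.last_in @ self.w
        grad_pre = grad_out * (1.0 - np.tanh(pre) ** 2)
        self.w -= lr * (self.last_in.T @ grad_pre)
        return grad_pre @ self.w.T               # gradient handed to the layer below

layers = [ToyLayer(8, 8) for _ in range(3)]
acts = [None] * len(layers)                      # activations "in flight" between layers
grads = [None] * len(layers)                     # gradients "in flight" between layers

for t in range(100):                             # stream of toy frames
    frame = rng.normal(size=(1, 8))
    target = np.ones((1, 8))

    # One pipelined tick: every layer does one unit of forward work
    # (conceptually in parallel on its own device) using last tick's outputs.
    new_acts = [None] * len(layers)
    for l, layer in enumerate(layers):
        x = frame if l == 0 else acts[l - 1]
        if x is not None:
            # Crude input skip connection: mix the current frame into every
            # layer's input as a stand-in for skip-based temporal integration.
            new_acts[l] = layer.forward(x + 0.1 * frame)

    # Gradients also move one layer per tick, i.e. forward in time.
    new_grads = [None] * len(layers)
    for l in reversed(range(len(layers))):
        if l == len(layers) - 1:
            g_in = None if new_acts[-1] is None else (new_acts[-1] - target)  # d(MSE)/d(out)
        else:
            g_in = grads[l + 1]                  # gradient emitted one tick ago by the layer above
        if g_in is not None and layers[l].last_in is not None:
            new_grads[l] = layers[l].backward_update(g_in)

    acts, grads = new_acts, new_grads

The point of the sketch is the scheduling, not the arithmetic: because each layer only ever waits one tick for its inputs and one tick for its incoming gradient, the layers can be placed on different devices and updated continuously as frames stream in, which is the low-latency, model-parallel regime the abstract describes.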

Related Material


@InProceedings{Malinowski_2021_CVPR,
    author    = {Malinowski, Mateusz and Vytiniotis, Dimitrios and Swirszcz, Grzegorz and Patraucean, Viorica and Carreira, Joao},
    title     = {Gradient Forward-Propagation for Large-Scale Temporal Video Modelling},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {9249-9259}
}