GlobalFlowNet: Video Stabilization Using Deep Distilled Global Motion Estimates

Jerin Geo James, Devansh Jain, Ajit Rajwade; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5078-5087

Abstract


Videos shot by laymen using hand-held cameras contain undesirable shaky motion. Estimating the global motion between successive frames, in a manner not influenced by moving objects, is central to many video stabilization techniques, but poses significant challenges. A large body of work uses 2D affine transformations or homography for the global motion. However, in this work, we introduce a more general representation scheme, which adapts any existing optical flow network to ignore moving objects and obtain a spatially smooth approximation of the global motion between video frames. We achieve this by a knowledge distillation approach, where we first introduce a low-pass filter module into the optical flow network to constrain the predicted optical flow to be spatially smooth. This becomes our student network, named GlobalFlowNet. Then, using the original optical flow network as the teacher network, we train the student network using a robust loss function. Given a trained GlobalFlowNet, we stabilize videos using a two-stage process. In the first stage, we correct the instability in affine parameters using a quadratic programming approach constrained by a user-specified cropping limit to control the loss of field of view. In the second stage, we stabilize the video further by smoothing the global motion parameters, expressed using a small number of discrete cosine transform coefficients. In extensive experiments on a variety of different videos, our technique outperforms state-of-the-art techniques in terms of subjective quality and different quantitative measures of video stability. Additionally, we present a new measure for the evaluation of video stabilization based on the flow generated by GlobalFlowNet, and argue that it is based on a more general motion model, in contrast to the affine motion model on which most existing measures are based. The source code is publicly available at https://github.com/GlobalFlowNet/GlobalFlowNet
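The second-stage idea of smoothing a motion-parameter trajectory by retaining only a few low-frequency DCT coefficients can be illustrated in isolation. The sketch below is not the authors' implementation; the function name `smooth_trajectory`, the synthetic camera path, and the cutoff `k=8` are illustrative assumptions, and it operates on a single 1-D parameter sequence rather than the paper's full per-frame global motion representation.

```python
import numpy as np
from scipy.fft import dct, idct

def smooth_trajectory(traj, k):
    """Smooth a 1-D motion-parameter trajectory by keeping only the
    first k DCT coefficients, i.e. the low-frequency components.
    (Illustrative sketch, not the paper's implementation.)"""
    coeffs = dct(traj, norm="ortho")
    coeffs[k:] = 0.0  # zero out high-frequency (jittery) components
    return idct(coeffs, norm="ortho")

# Synthetic example: a slow linear pan contaminated with frame-to-frame jitter
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)              # 120 frames
intended = 10.0 * t                         # intended smooth camera path
shaky = intended + rng.normal(0.0, 0.5, t.size)
stabilized = smooth_trajectory(shaky, k=8)

# The smoothed path should track the intended path more closely than the shaky one
err_shaky = np.abs(shaky - intended).mean()
err_stab = np.abs(stabilized - intended).mean()
print(f"mean abs deviation: shaky={err_shaky:.3f}, stabilized={err_stab:.3f}")
```

Truncating the DCT spectrum acts as an ideal low-pass filter on the trajectory, which is why a small number of coefficients suffices to represent the intended (slowly varying) camera motion while discarding hand shake.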

Related Material


@InProceedings{James_2023_WACV,
  author    = {James, Jerin Geo and Jain, Devansh and Rajwade, Ajit},
  title     = {GlobalFlowNet: Video Stabilization Using Deep Distilled Global Motion Estimates},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {5078-5087}
}