Flip-Invariant Motion Representation

Takumi Kobayashi; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5628-5637

Abstract
In action recognition, local motion descriptors effectively represent video sequences in which target actions appear in localized spatio-temporal regions. For robust recognition, these fundamental descriptors must be invariant to horizontal (mirror) flipping of video frames, which frequently occurs due to changes of camera viewpoint and action direction and deteriorates classification performance. In this paper, we propose two approaches to render local motion descriptors flip-invariant. The first leverages local motion flows to ensure invariance on the input patches from which the descriptors are computed. The second theoretically derives an invariant form from the flipping transformation applied to hand-crafted descriptors; this method is further extended to ConvNet descriptors by learning the invariant form from data. Experimental results on human action classification show that the proposed methods favorably improve the performance of both hand-crafted and ConvNet descriptors.
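As a rough illustration of the flip-invariance problem the abstract describes, the following minimal Python/NumPy sketch shows how horizontal mirroring acts on a dense optical-flow field (the spatial x-axis is reversed and the horizontal motion component changes sign), and how any descriptor can be symmetrized over the flip so that it becomes invariant by construction. The names hflip_flow and flip_invariant_descriptor and the group-averaging construction are illustrative assumptions, not the paper's specific derivation.

import numpy as np

def hflip_flow(flow):
    # flow: (H, W, 2) array of (u, v) motion vectors per pixel.
    # Mirroring a frame reverses the spatial x-axis, and the
    # horizontal component u of every flow vector flips sign.
    flipped = flow[:, ::-1].copy()
    flipped[..., 0] = -flipped[..., 0]
    return flipped

def flip_invariant_descriptor(flow, describe):
    # Generic symmetrization (an assumed construction, not the paper's):
    # averaging a descriptor over a patch and its mirror yields a
    # representation unchanged under horizontal flipping, since
    # flipping merely swaps the two terms of the average.
    return 0.5 * (describe(flow) + describe(hflip_flow(flow)))

For example, taking describe to be a histogram of flow orientations, flip_invariant_descriptor(flow, describe) returns identical features for a flow patch and its mirrored copy.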

Related Material
BibTeX:
@InProceedings{Kobayashi_2017_ICCV,
author = {Kobayashi, Takumi},
title = {Flip-Invariant Motion Representation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {5628-5637}
}