Mitigating and Evaluating Static Bias of Action Representations in the Background and the Foreground

Haoxin Li, Yuan Liu, Hanwang Zhang, Boyang Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 19911-19923

Abstract


In video action recognition, shortcut static features can interfere with the learning of motion features, resulting in poor out-of-distribution (OOD) generalization. The video background is clearly a source of static bias, but the video foreground, such as the clothing of the actor, can also introduce static bias. In this paper, we empirically verify the existence of foreground static bias by creating test videos with conflicting signals from the static and moving portions of the video. To tackle this issue, we propose a simple yet effective technique, StillMix, to learn robust action representations. Specifically, StillMix identifies bias-inducing video frames using a 2D reference network and mixes them with videos for training, which suppresses bias effectively even when we cannot explicitly extract the source of bias within each video frame or enumerate the types of bias. Finally, to precisely evaluate static bias, we synthesize two new benchmarks: SCUBA for static cues in the background, and SCUFO for static cues in the foreground. With extensive experiments, we demonstrate that StillMix mitigates both types of static bias and improves video representations for downstream applications.
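
The abstract describes StillMix only at a high level. As a rough illustration of the frame-mixing step, the sketch below blends a training clip with a single static frame so that motion comes from the clip while additional static appearance comes from the frame. The function name, tensor shapes, and mixing-ratio range are illustrative assumptions, not the authors' released implementation, which additionally uses a 2D reference network to select the bias-inducing frames.

# Minimal, hypothetical sketch of mixing a video clip with a static frame.
# Names and the mixing-ratio range are assumptions for illustration only.
import torch

def mix_video_with_static_frame(video: torch.Tensor,
                                bias_frame: torch.Tensor,
                                lam: float) -> torch.Tensor:
    """Blend a (C, T, H, W) clip with a (C, H, W) static frame.

    The frame is broadcast over the temporal axis, so every time step of the
    mixed clip carries the same superimposed static content.
    """
    return lam * video + (1.0 - lam) * bias_frame.unsqueeze(1)

# Toy usage with random tensors standing in for real data.
video = torch.rand(3, 16, 224, 224)             # C, T, H, W
bias_frame = torch.rand(3, 224, 224)            # frame flagged by a 2D reference network
lam = float(torch.empty(1).uniform_(0.5, 1.0))  # mixing ratio; the range is an assumption
mixed = mix_video_with_static_frame(video, bias_frame, lam)
print(mixed.shape)                              # torch.Size([3, 16, 224, 224])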

Related Material


@InProceedings{Li_2023_ICCV,
    author    = {Li, Haoxin and Liu, Yuan and Zhang, Hanwang and Li, Boyang},
    title     = {Mitigating and Evaluating Static Bias of Action Representations in the Background and the Foreground},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {19911-19923}
}