Learning an Augmented RGB Representation With Cross-Modal Knowledge Distillation for Action Detection

Rui Dai, Srijan Das, François Bremond; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13053-13064

Abstract


In video understanding, most cross-modal knowledge distillation (KD) methods are tailored for classification tasks, focusing on the discriminative representation of trimmed videos. However, action detection requires not only categorizing actions, but also localizing them in untrimmed videos. Transferring knowledge about temporal relations is therefore critical for this task, yet it is missing in previous cross-modal KD frameworks. To this end, we aim at learning an augmented RGB representation for action detection, taking advantage of additional modalities at training time through KD. We propose a KD framework consisting of two levels of distillation. On one hand, atomic-level distillation encourages the RGB student to learn the sub-representation of the actions from the teacher in a contrastive manner. On the other hand, sequence-level distillation encourages the student to learn the temporal knowledge from the teacher, which consists of transferring the Global Contextual Relations and the action Boundary Saliency. The result is an Augmented-RGB stream that achieves performance competitive with the two-stream network while using only RGB at inference time. Extensive experimental analysis shows that our proposed distillation framework is generic and outperforms other popular cross-modal distillation methods in the action detection task.
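To make the two-level idea concrete, below is a minimal sketch, not the authors' implementation, of how such a distillation loss could be assembled. It assumes per-frame features of shape (batch, time, dim) from an RGB student and a privileged-modality teacher; the function names (atomic_contrastive_loss, sequence_relation_loss, distillation_loss), the InfoNCE formulation, the self-similarity matching used for the Global Contextual Relations, and the loss weights are illustrative assumptions, and the Boundary Saliency term is omitted for brevity.

```python
# Hedged sketch of a two-level cross-modal KD loss (assumed design, not the paper's code).
import torch
import torch.nn.functional as F


def atomic_contrastive_loss(student_feats, teacher_feats, temperature=0.07):
    """InfoNCE-style atomic-level loss: each student frame feature should match the
    teacher feature at the same time step (positive) against all other steps (negatives)."""
    b, t, d = student_feats.shape
    s = F.normalize(student_feats.reshape(b * t, d), dim=-1)
    z = F.normalize(teacher_feats.reshape(b * t, d), dim=-1)
    logits = s @ z.t() / temperature                 # (b*t, b*t) similarity matrix
    targets = torch.arange(b * t, device=logits.device)
    return F.cross_entropy(logits, targets)


def sequence_relation_loss(student_feats, teacher_feats):
    """Sequence-level loss: match the temporal self-similarity (a proxy for global
    contextual relations) of the student sequence to that of the teacher sequence."""
    def self_sim(x):
        x = F.normalize(x, dim=-1)
        return x @ x.transpose(1, 2)                 # (batch, time, time)
    return F.mse_loss(self_sim(student_feats), self_sim(teacher_feats))


def distillation_loss(student_feats, teacher_feats, w_atomic=1.0, w_seq=1.0):
    """Combine both levels; the weights are free hyper-parameters in this sketch."""
    return (w_atomic * atomic_contrastive_loss(student_feats, teacher_feats)
            + w_seq * sequence_relation_loss(student_feats, teacher_feats))


if __name__ == "__main__":
    rgb_student = torch.randn(2, 16, 256)            # 16 temporal steps, 256-dim features
    modality_teacher = torch.randn(2, 16, 256)       # teacher from an additional modality
    print(distillation_loss(rgb_student, modality_teacher).item())
```

In this sketch the teacher is frozen at training time and only the RGB student (and the distillation loss above, added to the detection loss) is optimized, so inference needs the RGB stream alone.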

Related Material


[bibtex]
@InProceedings{Dai_2021_ICCV,
    author    = {Dai, Rui and Das, Srijan and Bremond, Fran\c{c}ois},
    title     = {Learning an Augmented RGB Representation With Cross-Modal Knowledge Distillation for Action Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13053-13064}
}