Auto-X3D: Ultra-Efficient Video Understanding via Finer-Grained Neural Architecture Search

Yifan Jiang, Xinyu Gong, Junru Wu, Humphrey Shi, Zhicheng Yan, Zhangyang Wang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 2554-2563

Abstract


Efficient video architectures are key to deploying video action recognition systems on devices with limited computing capability. Unfortunately, existing video architectures are often computationally intensive and not suitable for such applications. The recent X3D work presents a new family of efficient video models by expanding a hand-crafted image architecture along multiple axes, such as space, time, width, and depth. Although operating in a conceptually large space, X3D searches one axis at a time and explores only a small set of 30 architectures in total, which does not sufficiently cover the space. This paper bypasses existing 2D architectures and directly searches for 3D architectures in a fine-grained space, where the block type, filter number, expansion ratio, and attention block are jointly searched. A probabilistic neural architecture search method is adopted to search efficiently in such a large space. Evaluations on the Kinetics and Something-Something-V2 benchmarks confirm that our Auto-X3D models outperform existing ones by up to 1.7% in accuracy under similar FLOPs, and reduce the computational cost by up to 1.74x while reaching similar performance. Code will be publicly available.
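
To illustrate the idea of probabilistic search over a fine-grained, jointly sampled space, the sketch below maintains an independent categorical distribution per layer over block type, filter number, expansion ratio, and attention usage, samples an architecture, and nudges the distributions with a reward signal. The choice sets, the per-layer independence, and the reward-weighted update rule are illustrative assumptions, not the authors' actual search space or algorithm.

```python
import numpy as np

# Hypothetical per-layer choice sets; the actual Auto-X3D space differs.
BLOCK_TYPES = ["3d_conv", "(2+1)d_conv", "channelwise_3d"]
FILTERS = [24, 48, 96, 192]
EXPANSION_RATIOS = [2.0, 2.5, 3.0, 4.0]
ATTENTION = [False, True]
CHOICE_SETS = [BLOCK_TYPES, FILTERS, EXPANSION_RATIOS, ATTENTION]


class LayerDistribution:
    """Independent categorical distribution over one layer's choices."""

    def __init__(self, rng):
        self.rng = rng
        # One probability vector per choice dimension, initialized uniform.
        self.probs = [np.full(len(c), 1.0 / len(c)) for c in CHOICE_SETS]

    def sample(self):
        """Draw one index per choice dimension."""
        return [self.rng.choice(len(p), p=p) for p in self.probs]

    def decode(self, indices):
        """Map sampled indices back to concrete choice values."""
        return tuple(c[i] for c, i in zip(CHOICE_SETS, indices))

    def update(self, indices, reward, lr=0.1):
        """Toy reward-weighted update: move mass toward rewarded choices."""
        for p, i in zip(self.probs, indices):
            p[i] += lr * reward
            p /= p.sum()


rng = np.random.default_rng(0)
layers = [LayerDistribution(rng) for _ in range(5)]  # toy 5-layer network

# One search iteration: sample an architecture, evaluate it (stubbed here),
# and shift each layer's distribution toward the sampled choices.
indices_per_layer = [layer.sample() for layer in layers]
architecture = [layer.decode(idx) for layer, idx in zip(layers, indices_per_layer)]
reward = 0.5  # stand-in for the validation accuracy of the sampled network
for layer, idx in zip(layers, indices_per_layer):
    layer.update(idx, reward)
print(architecture)
```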

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Jiang_2022_WACV,
    author    = {Jiang, Yifan and Gong, Xinyu and Wu, Junru and Shi, Humphrey and Yan, Zhicheng and Wang, Zhangyang},
    title     = {Auto-X3D: Ultra-Efficient Video Understanding via Finer-Grained Neural Architecture Search},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {2554-2563}
}