D2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos

Christian Schmidt, Ali Athar, Sabarinath Mahadevan, Bastian Leibe; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1200-1209

Abstract


Despite receiving significant attention from the research community, the task of segmenting and tracking objects in monocular videos still has much room for improvement. Existing works have simultaneously justified the efficacy of dilated and deformable convolutions for various image-level segmentation tasks. This gives reason to believe that 3D extensions of such convolutions should also yield performance improvements for video-level segmentation tasks. However, this aspect has not yet been explored thoroughly in existing literature. In this paper, we propose Dynamic Dilated Convolutions (D2Conv3D): a novel type of convolution which draws inspiration from dilated and deformable convolutions and extends them to the 3D (spatio-temporal) domain. We experimentally show that D2Conv3D can be used to improve the performance of multiple 3D CNN architectures across multiple video segmentation-related benchmarks by simply employing D2Conv3D as a drop-in replacement for standard convolutions. We further show that D2Conv3D outperforms trivial extensions of existing dilated and deformable convolutions to 3D. Lastly, we set a new state-of-the-art on the DAVIS 2016 Unsupervised Video Object Segmentation benchmark. Code is made publicly available at https://github.com/Schmiddo/d2conv3d.
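To give a concrete picture of what a "drop-in replacement for standard convolutions" with input-dependent dilation could look like, the PyTorch sketch below is illustrative only: it approximates dynamic dilation by softly mixing a fixed set of candidate dilation rates with weights predicted from the input clip. The module name DynamicDilatedConv3d, the candidate_dilations set, and the gating head are assumptions made for this sketch and are not taken from the paper; the authors' actual D2Conv3D implementation is in the repository linked above.

# Illustrative sketch only -- NOT the authors' D2Conv3D implementation.
# It mimics "dynamic dilation" by softly mixing a fixed set of candidate
# dilation rates with weights predicted from the input clip.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicDilatedConv3d(nn.Module):
    """Drop-in stand-in for nn.Conv3d whose effective dilation depends on the input.

    `candidate_dilations` and the gating head are hypothetical choices for this
    sketch; the real D2Conv3D is available in the linked repository.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 candidate_dilations=(1, 2, 3)):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilations = candidate_dilations
        self.weight = nn.Parameter(
            torch.empty(out_channels, in_channels, *([kernel_size] * 3)))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        # Tiny gating head: global average context -> weights over dilation rates.
        self.gate = nn.Linear(in_channels, len(candidate_dilations))

    def forward(self, x):  # x: (N, C, T, H, W)
        context = x.mean(dim=(2, 3, 4))                # (N, C)
        gates = F.softmax(self.gate(context), dim=1)   # (N, K)
        out = 0
        for k, d in enumerate(self.dilations):
            pad = d * (self.kernel_size - 1) // 2      # keep spatio-temporal size
            y = F.conv3d(x, self.weight, self.bias, padding=pad, dilation=d)
            out = out + gates[:, k].view(-1, 1, 1, 1, 1) * y
        return out


if __name__ == "__main__":
    conv = DynamicDilatedConv3d(8, 16)          # replaces nn.Conv3d(8, 16, 3, padding=1)
    clip = torch.randn(2, 8, 4, 32, 32)         # (batch, channels, frames, H, W)
    print(conv(clip).shape)                     # torch.Size([2, 16, 4, 32, 32])

Mixing over discrete dilation rates keeps the operator differentiable end to end, which is one simple way to make the dilation "dynamic" without deformable sampling; the paper's actual formulation may differ.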

Related Material


BibTeX
@InProceedings{Schmidt_2022_WACV,
    author    = {Schmidt, Christian and Athar, Ali and Mahadevan, Sabarinath and Leibe, Bastian},
    title     = {D2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1200-1209}
}