The Animation Transformer: Visual Correspondence via Segment Matching

Evan Casey, Víctor Pérez, Zhuoru Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11323-11332

Abstract

Visual correspondence is a fundamental building block for assistive tools in hand-drawn animation. However, while a large body of work has focused on learning visual correspondences at the pixel level, few approaches have emerged to learn correspondence at the level of line enclosures (segments) that naturally occur in hand-drawn animation. Exploiting this structure has numerous benefits: it avoids the memory complexity of pixel attention over high-resolution images and enables the use of real-world animation datasets that contain correspondence information at the level of per-segment colors. To that end, we propose the Animation Transformer (AnT), which uses a Transformer-based architecture to learn the spatial and visual relationships between segments across a sequence of images. By leveraging a forward match loss and a cycle consistency loss, our approach attains excellent results compared to state-of-the-art pixel approaches on challenging datasets from real animation productions that lack ground-truth correspondence labels.
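The cycle consistency idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: plain dot-product similarity stands in for the Transformer, segments are represented by made-up feature vectors, and the loss form (cross-entropy of the A→B→A round trip against the identity) is one common way to realize cycle consistency, assumed here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match_segments(feat_a, feat_b):
    """Soft correspondence from segments in frame A to segments in frame B.

    feat_a: (Na, D) segment features for frame A
    feat_b: (Nb, D) segment features for frame B
    Returns a row-stochastic (Na, Nb) match matrix.
    """
    sim = feat_a @ feat_b.T          # pairwise similarity scores
    return softmax(sim, axis=1)      # each row is a distribution over B's segments

def cycle_consistency_loss(feat_a, feat_b):
    """Penalize A -> B -> A round trips that do not return to the start segment."""
    p_ab = match_segments(feat_a, feat_b)   # A -> B
    p_ba = match_segments(feat_b, feat_a)   # B -> A
    p_cycle = p_ab @ p_ba                   # (Na, Na) round-trip distribution
    # Cross-entropy against the identity: segment i should map back to itself.
    diag = np.clip(np.diag(p_cycle), 1e-9, 1.0)
    return -np.mean(np.log(diag))
```

With sharply distinctive features (e.g. `feat = np.eye(4) * 10.0` for both frames), `match_segments(feat, feat)` is close to the identity and the cycle loss is near zero; diffuse or mismatched features raise the loss.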

Related Material

[bibtex]
@InProceedings{Casey_2021_ICCV,
  author    = {Casey, Evan and P\'erez, V{\'\i}ctor and Li, Zhuoru},
  title     = {The Animation Transformer: Visual Correspondence via Segment Matching},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {11323-11332}
}