MICap: A Unified Model for Identity-Aware Movie Descriptions
Abstract
Characters are an important aspect of any storyline, and identifying and including them in descriptions is necessary for story understanding. While previous work has largely ignored identity and generated captions with someone (anonymized names), recent work formulates id-aware captioning as a fill-in-the-blanks (FITB) task, where, given a caption with blanks, the goal is to predict person id labels. However, to predict captions with ids, a two-stage approach is required: first predict captions with someone, then fill in identities. In this work, we present a new single-stage approach that can seamlessly switch between id-aware caption generation and FITB when given a caption with blanks. Our model, Movie-Identity Captioner (MICap), uses a shared auto-regressive decoder that benefits from training with FITB and full-caption generation objectives, while the encoder can benefit from or disregard captions with blanks as input. Another challenge with id-aware captioning is the lack of a metric to capture subtle differences between person ids. To this end, we introduce iSPICE, a caption evaluation metric that focuses on identity tuples created through intermediate scene graphs. We evaluate MICap on the Large-Scale Movie Description Challenge (LSMDC), where we show a 4.2% improvement in FITB accuracy and a 1-2% bump in classic captioning metrics.
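For intuition only, the sketch below shows how an identity-focused tuple score in the spirit of iSPICE could be computed. It is not the paper's implementation: the actual metric derives tuples from SPICE scene graphs, whereas this toy version approximates them with (identity, next-word) pairs, and the "P1"/"P2" identity-token format and all function names are assumptions made for illustration.

```python
# Toy sketch (not the paper's iSPICE): keep only tuples containing a
# person-identity token (assumed here to look like "P1", "P2", ...) and
# compare candidate vs. reference captions with an F1 score.
import re

ID_PATTERN = re.compile(r"P\d+")  # hypothetical identity tokens

def identity_tuples(caption: str) -> set[tuple[str, str]]:
    """Pair each identity token with the word that follows it
    (a crude stand-in for scene-graph relations)."""
    words = caption.split()
    tuples = set()
    for i, w in enumerate(words):
        if ID_PATTERN.fullmatch(w) and i + 1 < len(words):
            tuples.add((w, words[i + 1].lower()))
    return tuples

def ispice_like_f1(candidate: str, reference: str) -> float:
    """F1 between identity tuples of a candidate and a reference caption."""
    cand, ref = identity_tuples(candidate), identity_tuples(reference)
    if not cand or not ref:
        return 0.0
    tp = len(cand & ref)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(cand), tp / len(ref)
    return 2 * precision * recall / (precision + recall)

print(ispice_like_f1("P1 hugs P2 tightly", "P1 hugs P2 and smiles"))  # 0.5
```

Because only identity-bearing tuples are scored, swapping two person ids changes the result even when every other word of the caption is correct, which is the kind of subtle difference the abstract says standard captioning metrics miss.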
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Raajesh_2024_CVPR,
  author    = {Raajesh, Haran and Desanur, Naveen Reddy and Khan, Zeeshan and Tapaswi, Makarand},
  title     = {MICap: A Unified Model for Identity-Aware Movie Descriptions},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {14011-14021}
}