EXIF As Language: Learning Cross-Modal Associations Between Images and Camera Metadata

Chenhao Zheng, Ayush Shrivastava, Andrew Owens; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 6945-6956

Abstract


We learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image.
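Below is a minimal sketch, not the authors' released code, of how the ideas in the abstract might be implemented: a CLIP-style contrastive embedding between image patches and EXIF metadata serialized as text, followed by zero-shot splice localization via clustering of per-patch embeddings. All names (exif_to_text, CrossModalEmbedding, localize_splice), the encoder choices, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only; assumes PyTorch and scikit-learn.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


def exif_to_text(exif: dict) -> str:
    """Serialize EXIF tags into a single text string,
    e.g. 'Make: Canon, Model: EOS 5D, FocalLength: 35 mm, ...'."""
    return ", ".join(f"{k}: {v}" for k, v in exif.items())


class CrossModalEmbedding(nn.Module):
    """CLIP-style joint embedding between image patches and EXIF text."""

    def __init__(self, patch_encoder: nn.Module, text_encoder: nn.Module):
        super().__init__()
        self.patch_encoder = patch_encoder   # e.g. a CNN over image patches
        self.text_encoder = text_encoder     # e.g. a transformer over tokenized EXIF text
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable temperature (log 1/0.07)

    def forward(self, patches, exif_tokens):
        img = F.normalize(self.patch_encoder(patches), dim=-1)    # (B, D)
        txt = F.normalize(self.text_encoder(exif_tokens), dim=-1) # (B, D)
        logits = self.logit_scale.exp() * img @ txt.t()           # (B, B) similarity matrix
        labels = torch.arange(len(patches), device=patches.device)
        # Symmetric InfoNCE: each patch should match the EXIF text of its own photo.
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def localize_splice(model: CrossModalEmbedding, patches: torch.Tensor) -> torch.Tensor:
    """Zero-shot splice localization: embed every patch of a single image and
    cluster the embeddings into two groups (consistent vs. inconsistent camera)."""
    with torch.no_grad():
        feats = F.normalize(model.patch_encoder(patches), dim=-1).cpu().numpy()
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)  # per-patch cluster id
    return torch.from_numpy(labels)  # reshape to the patch grid to form a splice mask
```

In this reading, the spliced region is detected because its patches embed differently from the rest of the image (they imply a different camera), so a simple two-way clustering of patch embeddings separates it without any forgery-specific training.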

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Zheng_2023_CVPR,
    author    = {Zheng, Chenhao and Shrivastava, Ayush and Owens, Andrew},
    title     = {EXIF As Language: Learning Cross-Modal Associations Between Images and Camera Metadata},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {6945-6956}
}