Embedding Shift Dissection on CLIP: Effects of Augmentations on VLM's Representation Learning

Ashim Dahal, Saydul Akbar Murad, Nick Rahimi; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 4814-4818

Abstract


Understanding how representations shift in vision-language models such as CLIP under different augmentations provides valuable insight into mechanistic interpretability. In this study, we measure the shift in CLIP's embeddings under 9 common augmentation techniques: noise, blur, color jitter, scale and rotate, flip, elastic and perspective transforms, random brightness and contrast, and coarse dropout of pixel blocks. We scrutinize the embedding shifts through attention-map similarity, patch similarity, edge and detail preservation, cosine similarity, L2 distance, pairwise distance, and dendrogram clusters, and provide qualitative analysis on sample images. Our findings suggest that certain augmentations, such as noise, perspective transform, and shift scaling, have a markedly stronger impact on the embedding shift. This study provides a concrete foundation for future work on VLM robustness for mechanistic interpretability and adversarial data defense.
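Two of the shift metrics named above, cosine similarity and L2 distance, can be sketched in plain Python. This is a minimal illustration, not the authors' code: the `original` and `augmented` vectors are toy stand-ins for the CLIP embeddings of an image before and after an augmentation.

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (||a|| * ||b||); 1.0 means no directional shift.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def l2_distance(a, b):
    # Euclidean distance between the two embeddings; 0.0 means no shift.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy stand-ins for CLIP embeddings (real ones are 512- or 768-dimensional).
original = [0.1, 0.3, -0.2, 0.5]
augmented = [0.12, 0.28, -0.25, 0.47]

print(cosine_similarity(original, augmented))
print(l2_distance(original, augmented))
```

In the paper's setting, the same two functions would be applied to the image-encoder outputs for each of the 9 augmentations, giving one similarity/distance pair per augmentation type.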

Related Material


@InProceedings{Dahal_2025_CVPR,
  author    = {Dahal, Ashim and Murad, Saydul Akbar and Rahimi, Nick},
  title     = {Embedding Shift Dissection on CLIP: Effects of Augmentations on VLM's Representation Learning},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {4814-4818}
}