Perceptual Inductive Bias Is What You Need Before Contrastive Learning

Junru Zhao, Tianqin Li, Dunhan Jiang, Shenghao Wu, Alan Ramirez, Tai Sing Lee; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 9621-9630

Abstract


David Marr's seminal theory of human perception stipulates that visual processing is a multi-stage process, prioritizing the derivation of boundary and surface properties before forming semantic object representations. In contrast, contrastive representation learning frameworks typically bypass this explicit multi-stage approach, defining their objective as the direct learning of a semantic representation space for objects. While effective in general contexts, this approach sacrifices the inductive biases of vision, leading to slower convergence and to shortcut learning that manifests as texture bias. In this work, we demonstrate that leveraging Marr's multi-stage theory--by first constructing boundary- and surface-level representations using perceptual constructs from early visual processing stages and subsequently training for object semantics--leads to 2x faster convergence on ResNet18, improved final representations on semantic segmentation, depth estimation, and object recognition, and enhanced robustness and out-of-distribution capability. In sum, we propose a pretraining stage preceding general contrastive representation pretraining that further enhances final representation quality and reduces overall convergence time via inductive biases from the human visual system.

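To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a dense pretext head first regresses a precomputed boundary/surface map, after which the same backbone is trained with a standard SimCLR-style NT-Xent contrastive loss. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: TwoStagePretrainer, percept_head, and the MSE boundary-regression pretext are hypothetical names and choices, and the paper's actual perceptual constructs may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class TwoStagePretrainer(nn.Module):
    """Hypothetical module pairing a perceptual pretext head with a
    contrastive projection head on a shared ResNet18 backbone."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features  # 512 for ResNet18
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        # Stage 1 head: predicts a single-channel boundary/surface map
        # (an assumed stand-in for the paper's perceptual constructs).
        self.percept_head = nn.Conv2d(feat_dim, 1, kernel_size=1)
        # Stage 2 head: SimCLR-style MLP projection for contrastive learning.
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def feature_map(self, x):
        return self.encoder[:-1](x)  # spatial features, before global pooling

    def embed(self, x):
        h = self.encoder(x).flatten(1)
        return F.normalize(self.proj(h), dim=1)

def stage1_loss(model, images, boundary_maps):
    """Perceptual pretext: regress precomputed boundary/surface targets."""
    pred = model.percept_head(model.feature_map(images))
    target = F.interpolate(boundary_maps, size=pred.shape[-2:])
    return F.mse_loss(pred, target)

def stage2_loss(model, view1, view2, tau=0.5):
    """Standard NT-Xent loss over two augmented views of the same batch."""
    z = torch.cat([model.embed(view1), model.embed(view2)], dim=0)  # (2N, d)
    sim = z @ z.t() / tau  # cosine similarities (z is L2-normalized)
    n = view1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float('-inf'))  # exclude self-similarity
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, pos)

In this sketch, one would first optimize stage1_loss on (image, boundary-map) pairs for a fixed budget, then switch the optimizer to stage2_loss on pairs of augmented views, reusing the warmed-up backbone for the contrastive stage.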
Related Material


@InProceedings{Zhao_2025_CVPR,
  author    = {Zhao, Junru and Li, Tianqin and Jiang, Dunhan and Wu, Shenghao and Ramirez, Alan and Lee, Tai Sing},
  title     = {Perceptual Inductive Bias Is What You Need Before Contrastive Learning},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {9621-9630}
}