[pdf]
[supp]
[bibtex]
@InProceedings{Kian_2025_ICCV,
    author    = {Kian, Setareh and Brooks-Lehnert, Shannon and Hirakawa, Keigo},
    title     = {Cross-Camera Module Training of Raw Sensor Data-Based Automotive Machine Vision: Challenges and Solutions},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {4537-4546}
}
Cross-Camera Module Training of Raw Sensor Data-Based Automotive Machine Vision: Challenges and Solutions
Abstract
Unlike popular object detection models, which are trained on full RGB (Red, Green, Blue) image datasets derived from sensors with an RGGB (Red, Green, Green, Blue) color filter array (CFA) pattern, automotive machine vision systems employ a variety of CFA patterns. Automotive manufacturers deploy various camera modules across their vehicle models and model years, making forward compatibility a high priority. We propose "cross-camera module training" of raw sensor data-based (RSDB) object detection, which reduces annotation costs via knowledge distillation by leveraging a neural network pre-trained on one real-world sensor (e.g. RGGB) to guide unsupervised learning for other sensors (e.g. RCCB, or Red, Clear, Clear, Blue). We detail why knowledge distillation is incompatible with raw sensor data, owing to the artifacts caused by CFA-sampled image registration. Based on rigorous signal sampling theory showing that the influence of the CFA vanishes in feature space, we propose a novel feature registration as a workaround. Evaluated on a new cross-camera-module automotive image dataset, the resultant forward-compatible RSDB object detector achieved higher precision at the 90% recall rate than the RGB-based model.