A Meta-Learning Approach for Domain Generalisation Across Visual Modalities in Vehicle Re-Identification
Recent advances in imaging technologies have enabled the use of infrared spectrum data for computer vision tasks that previously relied on traditional RGB data, such as re-identification. Infrared spectrum data can provide complementary and consistent visual information in low-visibility conditions such as night-time or adverse environments. However, a main obstacle to training multi-modal systems is the scarcity of available infrared spectrum data. It is therefore important to build systems that can readily adapt to data of multiple modalities at inference time. In this paper, we propose a domain generalisation approach for multi-modal vehicle re-identification that builds on the recent success of meta-learning training approaches, and we evaluate the ability of the model to perform on unseen modality data at testing time. In our experiments we use the RGB, near-infrared, and thermal-infrared modalities of the RGBNT100 dataset and show that our meta-learning training configuration improves the generalisation ability of the trained model compared to traditional training settings.
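To make the training configuration concrete, the following is a minimal, hypothetical sketch of episodic meta-learning for domain generalisation in the style of MLDG, not the paper's exact method: synthetic linear-regression "domains" stand in for the visual modalities, one modality is held out per episode as meta-test, and a first-order combined gradient (the Hessian term of the full meta-objective is dropped) updates the shared parameters. All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=64, d=5):
    """Synthetic 'modality': linear data with a domain-specific input shift
    (a crude stand-in for the RGB / NIR / TIR appearance gap)."""
    X = rng.normal(size=(n, d)) + shift
    true_w = np.arange(1, d + 1, dtype=float)  # shared ground-truth weights
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

# Three toy domains playing the role of the three visual modalities.
domains = [make_domain(s) for s in (0.0, 0.5, 1.0)]

def loss(w, X, y):
    r = X @ w - y
    return float(r @ r) / len(y)  # mean squared error

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)  # analytic MSE gradient

w = np.zeros(5)
alpha, eta, beta = 0.01, 0.01, 1.0  # inner lr, outer lr, meta-test weight
for step in range(500):
    # Episode: hold one domain out as meta-test, meta-train on the rest.
    held = step % len(domains)
    train = [domains[i] for i in range(len(domains)) if i != held]
    Xte, yte = domains[held]

    g_train = sum(grad(w, X, y) for X, y in train) / len(train)
    w_inner = w - alpha * g_train            # virtual inner update
    g_test = grad(w_inner, Xte, yte)         # generalisation gradient
    w = w - eta * (g_train + beta * g_test)  # first-order combined step

print([loss(w, X, y) for X, y in domains])
```

The key design point is that the meta-test gradient is evaluated at the virtually updated parameters `w_inner`, so each step rewards updates that also reduce the loss on the held-out domain, which is what encourages generalisation to a modality unseen during a given episode.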