TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception

Khurram Azeem Hashmi, Karthik Palyakere Suresh, Didier Stricker, Muhammad Zeshan Afzal; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 5645-5656

Abstract


Low-light conditions significantly degrade the performance of high-level vision tasks. Existing approaches either enhance low-light images without considering normal illumination scenarios, leading to poor generalization, or are tailored to specific tasks. We propose TorchAdapt, a real-time adaptive feature enhancement framework that generalizes robustly across varying illumination conditions without degrading performance in well-lit scenarios. TorchAdapt consists of two complementary modules: the Torch module enhances semantic features beneficial for downstream tasks, while the Adapt module dynamically modulates these enhancements based on input content. Leveraging a novel light-agnostic learning strategy, TorchAdapt aligns feature representations of enhanced and well-lit images to produce powerful illumination-invariant features. Extensive experiments on multiple high-level vision tasks, including object detection, face detection, instance segmentation, semantic segmentation, and video object detection, demonstrate that TorchAdapt consistently outperforms state-of-the-art low-light enhancement and task-specific methods in both low-light and light-agnostic settings. TorchAdapt thus provides a unified, flexible solution for robust visual perception across diverse lighting conditions.
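The abstract only names the two modules and the alignment objective without specifying their internals. The sketch below is a rough, hypothetical PyTorch illustration of how such a two-module design might be wired together: an enhancement branch, a content-dependent gate that modulates it, and an L1 feature-alignment loss between enhanced and well-lit representations. All class names, layer choices, and the loss form are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TorchModuleSketch(nn.Module):
    """Hypothetical enhancement branch (internals are not given in the abstract)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # Predict an additive enhancement residual for the input.
        return self.body(x)


class AdaptModuleSketch(nn.Module):
    """Hypothetical content-dependent gate that scales the enhancement per channel."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Gate in [0, 1]: near 0 leaves well-lit inputs untouched,
        # near 1 applies the full enhancement to dark inputs.
        return self.gate(x)


class TorchAdaptSketch(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.torch_module = TorchModuleSketch(channels)
        self.adapt_module = AdaptModuleSketch(channels)

    def forward(self, x):
        enhancement = self.torch_module(x)
        gate = self.adapt_module(x)
        # Modulate the predicted enhancement by the input-dependent gate.
        return x + gate * enhancement


def light_agnostic_alignment_loss(feat_enhanced, feat_well_lit):
    """Assumed alignment objective: pull enhanced low-light features
    toward features of well-lit images (abstract does not state the exact loss)."""
    return F.l1_loss(feat_enhanced, feat_well_lit)


if __name__ == "__main__":
    model = TorchAdaptSketch()
    low_light = torch.rand(2, 3, 128, 128) * 0.2  # dim input
    well_lit = torch.rand(2, 3, 128, 128)         # well-lit reference
    out = model(low_light)
    print(out.shape, light_agnostic_alignment_loss(out, well_lit).item())
```

In practice the enhanced output would be fed to a downstream task network (e.g. a detector), with the alignment term encouraging illumination-invariant features; the details above are placeholders.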

Related Material


[bibtex]
@InProceedings{Hashmi_2025_ICCV,
  author    = {Hashmi, Khurram Azeem and Suresh, Karthik Palyakere and Stricker, Didier and Afzal, Muhammad Zeshan},
  title     = {TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {5645-5656}
}