Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks

S M A Sharif, Azamat Myrzabekov, Nodirkhuja Khudjaev, Roman Tsoy, Seongwan Kim, Jaeho Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 6373-6383

Abstract


Low-light image enhancement (LLIE) plays a significant role in edge vision applications (EVA). Despite its widespread practicability, existing LLIE methods are often impractical for such applications due to their high computational costs. This study proposes a framework for learning optimized low-light image enhancement that tackles the limitations of existing enhancement methods and accelerates EVA. The proposed framework incorporates a lightweight, mobile-friendly deep network. We optimized the proposed model to INT8 precision using a post-training quantization strategy and deployed it on an edge device. The LLIE model achieves over 199 frames per second (FPS) on a low-power edge board. Additionally, we evaluated the practicability of the optimized model for accelerating vision applications in an edge environment. The experimental results illustrate that our optimized method can significantly accelerate the performance of SOTA vision algorithms in challenging low-light conditions across numerous everyday vision tasks, including object detection and image registration.
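The abstract mentions optimizing the enhancement network to INT8 precision via post-training quantization before edge deployment. The paper does not specify its toolchain; the following is a minimal sketch of that general workflow assuming a TensorFlow/Keras model and the TensorFlow Lite converter, with a hypothetical stand-in network and random calibration data in place of the authors' actual model and dataset.

```python
# Sketch of post-training INT8 quantization for an image-to-image network.
# Assumptions: TensorFlow Lite toolchain, a placeholder LLIE-style model, and
# random calibration tensors standing in for real low-light training crops.
import numpy as np
import tensorflow as tf

# Hypothetical lightweight stand-in for the enhancement network (not the paper's model).
enhancement_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])

def representative_dataset():
    # Calibration loop used by the converter to estimate activation ranges.
    # Real calibration would iterate over representative low-light images.
    for _ in range(100):
        yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(enhancement_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to full-integer kernels so the model can run on integer-only edge accelerators.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_int8_model = converter.convert()
with open("llie_int8.tflite", "wb") as f:
    f.write(tflite_int8_model)
```

The key design point illustrated here is that post-training quantization needs only a small calibration set and no retraining, which is why it is a common route to INT8 deployment on low-power edge boards.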

Related Material


[pdf]
[bibtex]
@InProceedings{A_Sharif_2024_CVPR,
    author    = {A Sharif, S M and Myrzabekov, Azamat and Khudjaev, Nodirkhuja and Tsoy, Roman and Kim, Seongwan and Lee, Jaeho},
    title     = {Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {6373-6383}
}