TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV systems in Emergency Response Scenarios

Daniel Rossi, Guido Borghi, Roberto Vezzani; Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops, 2025, pp. 376-385

Abstract


Designing efficient neural networks for embedded devices is a critical challenge, particularly in applications requiring real-time performance, such as aerial imaging with drones and UAVs for emergency response. In this work, we introduce TakuNet, a novel lightweight architecture that employs techniques such as depth-wise convolutions and an early downsampling stem to reduce computational complexity while maintaining accuracy. It leverages dense connections for fast convergence during training and uses 16-bit floating-point precision for optimization on embedded hardware accelerators. Experimental evaluation on public datasets shows that TakuNet achieves near-state-of-the-art accuracy in classifying aerial images of emergency situations, despite its minimal parameter count. Real-world tests on embedded devices, namely the Jetson Orin Nano and Raspberry Pi, confirm TakuNet's efficiency: it achieves more than 650 fps on the 15 W Jetson board, making it suitable for real-time AI processing on resource-constrained platforms and advancing the applicability of drones in emergency scenarios. The code and implementation details are publicly released.
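The abstract's efficiency claim rests in part on replacing standard convolutions with depth-wise (separable) ones. The parameter savings can be sketched with simple arithmetic; the kernel and channel sizes below are illustrative choices, not TakuNet's actual configuration:

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns one k x k x c_in filter
    # per output channel, mixing all input channels at once.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depth-wise step: one k x k filter per input channel (spatial filtering),
    # followed by a point-wise 1x1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 64 input channels, 128 output channels.
std = standard_conv_params(3, 64, 128)        # 73,728 weights
dws = depthwise_separable_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
print(std, dws, round(std / dws, 1))          # roughly an 8x reduction
```

The same factoring applies at every layer, which is how such architectures keep the overall parameter count minimal while preserving spatial filtering capacity.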

Related Material


@InProceedings{Rossi_2025_WACV,
  author    = {Rossi, Daniel and Borghi, Guido and Vezzani, Roberto},
  title     = {TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV systems in Emergency Response Scenarios},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {February},
  year      = {2025},
  pages     = {376-385}
}