SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection

Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Deep-learning-based object detectors are commonly deployed on mobile devices to solve a variety of tasks. For maximum accuracy, each detector is usually trained to solve one single specific task, and comes with a completely independent set of parameters. While this guarantees high performance, it is also highly inefficient, as each model has to be separately downloaded and stored. In this paper we address the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task? The main contributions of this paper are the following: 1) we perform the first systematic study of parameter-efficient transfer learning techniques for object detection problems; 2) we propose a technique to learn a model patch with a size that is dependent on the difficulty of the task to be learned, and validate our approach on 10 different object detection tasks. Our approach achieves accuracy similar to previously proposed approaches, while being significantly more compact.
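The core idea of the abstract, sharing one set of backbone weights across tasks and storing only a small per-task "model patch" whose size can grow with task difficulty, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the names (`make_patch`, `apply_patch`) and the choice of per-channel scale/bias parameters as the patch are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone weights: downloaded and stored once, reused by every task.
shared_weights = {f"layer{i}": rng.standard_normal((64, 64)) for i in range(4)}

def make_patch(layer_names, dim=64):
    """Hypothetical per-task patch: a per-channel scale and bias for a chosen
    subset of layers. Patching more layers yields a larger (more expressive)
    patch, so a harder task can use a bigger patch than an easy one."""
    return {name: {"scale": np.ones(dim), "bias": np.zeros(dim)}
            for name in layer_names}

def apply_patch(shared, patch):
    """Combine the shared weights with a task patch into task-specific weights.
    Unpatched layers are used as-is."""
    task_weights = {}
    for name, w in shared.items():
        if name in patch:
            p = patch[name]
            task_weights[name] = w * p["scale"][:, None] + p["bias"][:, None]
        else:
            task_weights[name] = w
    return task_weights

# An "easy" task patches one layer; a "hard" task patches all four.
easy_patch = make_patch(["layer3"])
hard_patch = make_patch(list(shared_weights))

shared_params = sum(w.size for w in shared_weights.values())
easy_params = sum(p["scale"].size + p["bias"].size for p in easy_patch.values())
hard_params = sum(p["scale"].size + p["bias"].size for p in hard_patch.values())
```

Here only `easy_params` or `hard_params` additional weights need to be stored per task, both orders of magnitude smaller than `shared_params`, which is the storage saving the paper targets.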

Related Material


[pdf]
[bibtex]
@InProceedings{Ye_2020_ACCV,
  author    = {Ye, Keren and Kovashka, Adriana and Sandler, Mark and Zhu, Menglong and Howard, Andrew and Fornoni, Marco},
  title     = {SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}