EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss

Zhuoyang Zhang, Han Cai, Song Han; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7859-7863

Abstract


We present EfficientViT-SAM, a new family of accelerated segment anything models. We retain SAM's lightweight prompt encoder and mask decoder while replacing the heavy image encoder with EfficientViT. For training, we begin with knowledge distillation from the SAM-ViT-H image encoder to EfficientViT. Subsequently, we conduct end-to-end training on the SA-1B dataset. Benefiting from EfficientViT's efficiency and capacity, EfficientViT-SAM delivers a 48.9x measured TensorRT speedup on an A100 GPU over SAM-ViT-H without sacrificing performance. Our code and pre-trained models are released at https://github.com/mit-han-lab/efficientvit.
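
The abstract describes a two-stage recipe: first distill image embeddings from the SAM-ViT-H encoder into the EfficientViT encoder, then train the whole model end-to-end on SA-1B. The snippet below is a minimal PyTorch sketch of the first (distillation) stage only; the encoder modules, embedding shapes, projection head, and the choice of an MSE feature loss are illustrative assumptions, not the authors' exact training setup (see the released code for that).

```python
# Minimal sketch of encoder-level knowledge distillation (stage 1).
# NOTE: `teacher_encoder` (SAM-ViT-H image encoder) and `student_encoder`
# (EfficientViT backbone) are placeholders here; the real model definitions
# live in https://github.com/mit-han-lab/efficientvit. The MSE feature loss
# and optional projection head are assumptions for illustration.
import torch
import torch.nn as nn


def distill_step(teacher_encoder: nn.Module,
                 student_encoder: nn.Module,
                 proj: nn.Module,
                 images: torch.Tensor,
                 optimizer: torch.optim.Optimizer) -> float:
    """One optimization step matching student embeddings to the frozen teacher's."""
    teacher_encoder.eval()
    with torch.no_grad():
        # Target: frozen SAM-ViT-H image embeddings, e.g. shape (B, 256, 64, 64).
        target = teacher_encoder(images)

    # Student: EfficientViT features, projected to the teacher's embedding shape.
    pred = proj(student_encoder(images))
    loss = nn.functional.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After this stage, the distilled EfficientViT encoder is combined with SAM's original prompt encoder and mask decoder, and the full model is trained end-to-end on SA-1B as stated in the abstract.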

Related Material


[bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Zhuoyang and Cai, Han and Han, Song},
    title     = {EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7859-7863}
}