Compressive Quantization for Fast Object Instance Search in Videos
Tan Yu, Zhenzhen Wang, Junsong Yuan; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 726-735
Abstract
Most current visual search systems focus on image-to-image (point-to-point) search, such as image and object retrieval. In contrast, fast image-to-video (point-to-set) search is much less explored. This paper tackles object instance search in videos, where efficient point-to-set matching is essential. By jointly optimizing vector quantization and hashing, we propose a compressive quantization method that compresses the M object proposals extracted from each video into only k binary codes, where k << M. The similarity between the query object and a whole video can then be determined by the Hamming distance between the query's binary code and the video's best-matched binary code. Our compressive quantization not only enables fast search but also significantly reduces the memory cost of storing video features. Despite the high compression ratio, the proposed compressive quantization can still effectively retrieve small objects in large video datasets. Systematic experiments on three benchmark datasets verify the effectiveness and efficiency of our compressive quantization.
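To make the point-to-set matching described above concrete, the following is a minimal sketch (not the authors' implementation): each video is represented by k binary codes, and it is scored by the Hamming distance between the query's binary code and the video's best-matched code. Names such as query_code, video_codes, and rank_videos are illustrative assumptions, and how the k codes are learned (the joint quantization-hashing optimization) is not shown.

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two equal-length binary (0/1) code vectors."""
    return int(np.count_nonzero(a != b))

def video_score(query_code, video_codes):
    """Point-to-set distance: the query's distance to the video's best-matched code."""
    return min(hamming_distance(query_code, c) for c in video_codes)

def rank_videos(query_code, database):
    """Rank videos (id -> array of k binary codes) by best-matched Hamming distance."""
    scores = {vid: video_score(query_code, codes) for vid, codes in database.items()}
    return sorted(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits, k = 64, 8  # code length and codes kept per video (k << M proposals)
    # Hypothetical database: 5 videos, each summarized by k binary codes.
    database = {f"video_{i}": rng.integers(0, 2, size=(k, bits)) for i in range(5)}
    query = rng.integers(0, 2, size=bits)
    print(rank_videos(query, database))  # video ids from best to worst match
```

Because each video contributes only k short binary codes, the search cost and memory footprint scale with k rather than with the number of proposals M.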
Related Material
[pdf]
[bibtex]
@InProceedings{Yu_2017_ICCV,
author = {Yu, Tan and Wang, Zhenzhen and Yuan, Junsong},
title = {Compressive Quantization for Fast Object Instance Search in Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}