Semantic Video-to-Video Search Using Sub-graph Grouping and Matching

Tae Eun Choe, Hongli Deng, Feng Guo, Mun Wai Lee, Niels Haering; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 787-794

Abstract


We propose a novel video event retrieval algorithm that, given a video query containing grouped events, searches a large-scale video database. Rather than looking for similar scenes using visual features, as conventional image retrieval algorithms do, we search for similar semantic events (e.g., finding a video in which a person parks a vehicle, meets another person, and exchanges a bag). Videos are analyzed semantically and represented by a graphical structure. The retrieval problem then becomes matching this graph against the graphs of events in the database. Since the query video may include noisy activities, or some events may not be detected by the semantic video analyzer, exact graph matching does not always work. For an efficient and effective solution, we introduce a novel subgraph indexing and matching scheme. Subgraphs are grouped, and their importance is further learned over videos using topic-modeling algorithms. After grouping and indexing subgraphs, the complex graph matching problem reduces to a simple vector comparison in a reduced-dimensional space. The performance of each approach is extensively evaluated and compared.
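The key idea of reducing graph matching to vector comparison can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: each video is represented as a set of labeled semantic event edges (subject, relation, object), each single edge is treated as a trivial subgraph index term, and videos are compared by cosine similarity of their bag-of-subgraphs vectors. The event tuples below are invented examples.

```python
# Toy illustration (not the paper's implementation): index videos by
# counts of labeled 1-edge subgraphs and compare via cosine similarity.
from collections import Counter
from math import sqrt

def subgraph_vector(event_edges):
    """Count occurrences of each labeled edge (a 1-edge subgraph)."""
    return Counter(event_edges)

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical semantic events extracted from two videos.
query = [("person", "parks", "vehicle"),
         ("person", "meets", "person"),
         ("person", "exchanges", "bag")]
candidate = [("person", "parks", "vehicle"),
             ("person", "meets", "person"),
             ("vehicle", "leaves", "scene")]

score = cosine(subgraph_vector(query), subgraph_vector(candidate))
```

In the paper, the index terms are grouped subgraphs whose weights are learned by topic modeling, but the matching step has the same flavor: a cheap comparison of fixed-length vectors instead of expensive exact graph matching.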

Related Material


[bibtex]
@InProceedings{Eun_2013_ICCV_Workshops,
author = {Tae Eun Choe and Hongli Deng and Feng Guo and Mun Wai Lee and Niels Haering},
title = {Semantic Video-to-Video Search Using Sub-graph Grouping and Matching},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {June},
year = {2013}
}