Online Multi-Task Clustering for Human Motion Segmentation

Gan Sun, Yang Cong, Lichen Wang, Zhengming Ding, Yun Fu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019


Human motion segmentation in the time domain has recently attracted attention due to its wide range of potential applications in action recognition, event detection, and scene understanding. However, most existing state-of-the-art methods address this problem in an offline, single-agent scenario, while many real-time applications (e.g., surveillance systems) urgently require segmenting videos captured by multiple agents. In this paper, we propose an Online Multi-task Clustering (OMTC) model for an online, multi-agent segmentation scenario, where each agent corresponds to one task. Specifically, a linear autoencoder framework is designed to project motion sequences into a common motion-aware space shared across multiple collaborating tasks, while the decoder obtains a motion-aware representation of each task via a temporal-preserving regularizer. To tackle the distribution-shift problem between each pair of tasks, task-specific projections are further proposed to align representations across the motion segmentation tasks. In this way, significant motion knowledge can be shared among multiple tasks, and the temporal data structure is also well preserved. For model optimization, an efficient and effective online optimization mechanism is derived to solve the large-scale formulation in real-time applications. Experimental results on the Keck, MAD, and our collected human motion datasets demonstrate the robustness, high accuracy, and efficiency of our OMTC model.
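The abstract's core building block, a linear autoencoder whose codes are kept temporally smooth by a regularizer, can be illustrated with a minimal NumPy sketch. This is not the paper's actual formulation: the function names, the tied gradient-descent update, and the squared-difference temporal penalty are all assumptions for illustration; the paper's multi-task coupling and online solver are omitted.

```python
import numpy as np

def temporal_penalty_grad(Z, lam):
    """Gradient of lam * sum_t ||z_t - z_{t-1}||^2 w.r.t. the codes Z (illustrative)."""
    S = Z[:, 1:] - Z[:, :-1]
    G = np.zeros_like(Z)
    G[:, 1:] += 2.0 * lam * S
    G[:, :-1] -= 2.0 * lam * S
    return G

def fit_linear_autoencoder(X, k, lam=0.1, lr=1e-3, iters=300, seed=0):
    """Hypothetical sketch: linear autoencoder with a temporal-smoothness regularizer.

    X : (d, n) feature sequence of one task (columns are frames).
    k : dimension of the common motion-aware latent space.
    Returns the encoder E (k, d), codes Z (k, n), and per-iteration losses.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    E = 0.01 * rng.standard_normal((k, d))  # encoder
    P = 0.01 * rng.standard_normal((d, k))  # decoder
    losses = []
    for _ in range(iters):
        Z = E @ X                     # latent codes of every frame
        R = P @ Z - X                 # reconstruction residual
        S = Z[:, 1:] - Z[:, :-1]      # code differences of adjacent frames
        losses.append(np.sum(R ** 2) + lam * np.sum(S ** 2))
        grad_P = 2.0 * R @ Z.T                                  # d||R||^2 / dP
        grad_Z = 2.0 * P.T @ R + temporal_penalty_grad(Z, lam)  # chain rule to Z
        E -= lr * (grad_Z @ X.T)
        P -= lr * grad_P
    return E, E @ X, losses

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 40))          # toy 6-dim features over 40 frames
E, Z, losses = fit_linear_autoencoder(X, k=2)
```

In a full pipeline, the smoothed codes `Z` would then be clustered (e.g., by k-means or spectral clustering over frames) to produce the temporal segments; the temporal penalty encourages adjacent frames to land in the same cluster.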

Related Material

@InProceedings{Sun_2019_ICCV,
  author    = {Sun, Gan and Cong, Yang and Wang, Lichen and Ding, Zhengming and Fu, Yun},
  title     = {Online Multi-Task Clustering for Human Motion Segmentation},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {Oct},
  year      = {2019}
}