3D Semantic Label Transfer in Human-Robot Collaboration

Dávid Rozenberszki, Gábor Sörös, Szilvia Szeier, András Lőrincz; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 2602-2611

Abstract


We tackle two practical problems in robotic scene understanding. First, the computational requirements of current semantic segmentation algorithms are prohibitive for typical robots. Second, the viewpoints of ground robots are quite different from the typical human viewpoints in training datasets, which may lead to objects being misclassified when seen from robot viewpoints. We present a system for sharing and reusing 3D semantic information between multiple agents with different viewpoints. We first co-localize all agents in the same coordinate system. Next, we create a 3D dense semantic model of the space from human viewpoints in close to real time. Finally, by re-rendering the model's semantic labels (and/or depth maps) from the ground robots' own estimated viewpoints and sharing them over the network, we can give 3D semantic understanding to simpler agents. We evaluate the reconstruction quality and show how tiny robots can reuse knowledge about the space collected by more capable peers.
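The label-transfer step described above, re-rendering the semantic model from a robot's estimated viewpoint, could be sketched as a point-based rasterizer with a z-buffer. The function name, argument conventions, and the simple per-point loop below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def render_semantic_labels(points_w, labels, K, T_wc, width, height, background=0):
    """Project labeled 3D world points into a robot's camera view,
    producing a per-pixel semantic label image (minimal sketch).

    points_w : (N, 3) world-frame points from the shared semantic model
    labels   : (N,)   integer class label per point
    K        : (3, 3) camera intrinsics of the receiving robot
    T_wc     : (4, 4) camera-to-world pose of the robot (from co-localization)
    """
    T_cw = np.linalg.inv(T_wc)                       # world -> camera transform
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]                # points in camera frame
    in_front = pts_c[:, 2] > 1e-6                    # keep points ahead of the camera
    pts_c, lab = pts_c[in_front], labels[in_front]

    uv = (K @ pts_c.T).T                             # pinhole projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, lab = u[valid], v[valid], pts_c[valid, 2], lab[valid]

    label_img = np.full((height, width), background, dtype=labels.dtype)
    depth = np.full((height, width), np.inf)         # z-buffer: nearest point wins
    for i in range(len(z)):
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]
            label_img[v[i], u[i]] = lab[i]
    return label_img
```

The same projection applied to `depth` instead of `label_img` would yield the shared depth maps the abstract mentions; a real system would also need hole filling, since a sparse point model does not cover every pixel.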

Related Material


[pdf]
[bibtex]
@InProceedings{Rozenberszki_2021_ICCV,
  author    = {Rozenberszki, D\'avid and S\"or\"os, G\'abor and Szeier, Szilvia and L\H{o}rincz, Andr\'as},
  title     = {3D Semantic Label Transfer in Human-Robot Collaboration},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {2602-2611}
}