Skill Transformer: A Monolithic Policy for Mobile Manipulation

Xiaoyu Huang, Dhruv Batra, Akshara Rai, Andrew Szot; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10852-10862

Abstract

We present Skill Transformer, an approach for solving long-horizon robotic tasks by combining conditional sequence modeling and skill modularity. Conditioned on egocentric and proprioceptive observations of a robot, Skill Transformer is trained end-to-end to predict both a high-level skill (e.g., navigation, picking, placing) and a whole-body low-level action (e.g., base and arm motion), using a transformer architecture and demonstration trajectories that solve the full task. It retains the composability and modularity of the overall task through a skill predictor module, while reasoning about low-level actions and avoiding the hand-off errors common in modular approaches. We test Skill Transformer on an embodied rearrangement benchmark and find that it performs robust task planning and low-level control in new scenarios, achieving a 2.5x higher success rate than baselines on hard rearrangement problems.
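
As a concrete illustration of the architecture the abstract describes, the minimal PyTorch sketch below shows a causal transformer over interleaved egocentric-visual and proprioceptive tokens that jointly predicts discrete skill logits and a continuous whole-body action. All module names, dimensions, and the skill/action spaces are illustrative assumptions based only on the abstract, not the authors' implementation.

# Minimal sketch, assuming precomputed visual features and a flat
# proprioceptive vector per timestep. Names and sizes are hypothetical.
import torch
import torch.nn as nn


class SkillTransformerSketch(nn.Module):
    def __init__(self, visual_dim=512, proprio_dim=32, d_model=256,
                 n_skills=4, action_dim=10, max_steps=32):
        super().__init__()
        # Project each observation modality into a shared token space.
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.proprio_proj = nn.Linear(proprio_dim, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, 2 * max_steps, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Two heads: a high-level skill (e.g., navigate / pick / place) and
        # a low-level whole-body action (base and arm motion).
        self.skill_head = nn.Linear(d_model, n_skills)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, visual_feats, proprio):
        # visual_feats: (B, T, visual_dim), proprio: (B, T, proprio_dim)
        B, T, _ = visual_feats.shape
        tokens = torch.stack([self.visual_proj(visual_feats),
                              self.proprio_proj(proprio)], dim=2)
        tokens = tokens.reshape(B, 2 * T, -1) + self.pos_emb[:, :2 * T]
        # Causal mask: each token attends only to current and past tokens.
        causal = torch.triu(torch.full((2 * T, 2 * T), float("-inf")),
                            diagonal=1)
        h = self.backbone(tokens, mask=causal)
        last = h[:, -1]  # features at the most recent timestep
        return self.skill_head(last), self.action_head(last)


if __name__ == "__main__":
    policy = SkillTransformerSketch()
    vis = torch.randn(2, 8, 512)   # batch of 2 trajectories, 8 steps each
    prop = torch.randn(2, 8, 32)
    skill_logits, action = policy(vis, prop)
    print(skill_logits.shape, action.shape)  # (2, 4) and (2, 10)

In the paper, a dedicated skill predictor module makes the skill choice that the low-level policy acts on; this sketch simplifies that to two heads on a shared backbone, which would be trained end-to-end on demonstration trajectories with a cross-entropy loss on skills and a regression loss on actions.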

Related Material

[pdf] [supp] [arXiv]
BibTeX:
@InProceedings{Huang_2023_ICCV,
    author    = {Huang, Xiaoyu and Batra, Dhruv and Rai, Akshara and Szot, Andrew},
    title     = {Skill Transformer: A Monolithic Policy for Mobile Manipulation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {10852-10862}
}