MoST: Multi-Modality Scene Tokenization for Motion Prediction
Abstract
Many existing motion prediction approaches rely on symbolic perception outputs, such as bounding boxes, road graph information, and traffic lights, to generate agent trajectories. This symbolic representation is a high-level abstraction of the real world, which may render the motion prediction model vulnerable to perception errors (e.g., failures in detecting open-vocabulary obstacles) while missing salient information from the scene context (e.g., poor road conditions). An alternative paradigm is end-to-end learning from raw sensors. However, this approach suffers from a lack of interpretability and requires significantly more training resources. In this work, we propose tokenizing the visual world into a compact set of scene elements and then leveraging pre-trained image foundation models and LiDAR neural networks to encode all the scene elements in an open-vocabulary manner. The image foundation model enables our scene tokens to encode the general knowledge of the open world, while the LiDAR neural network encodes geometry information. Our proposed representation can efficiently encode multi-frame, multi-modality observations with a few hundred tokens and is compatible with most transformer-based architectures. To evaluate our method, we have augmented the Waymo Open Motion Dataset with camera embeddings. Experiments on the Waymo Open Motion Dataset show that our approach leads to significant performance improvements over the state of the art.
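To make the tokenization idea concrete, below is a minimal, illustrative sketch of fusing per-element image-foundation-model features and LiDAR features into scene tokens for a transformer-based predictor. The module names, feature dimensions, and concatenation-based fusion are assumptions for illustration only and are not the paper's actual architecture.

```python
# Illustrative sketch of multi-modality scene tokenization (not the paper's implementation).
# Assumed: pooled per-element image features (e.g., from a frozen foundation model) and
# pooled per-element LiDAR features are already available.
import torch
import torch.nn as nn


class SceneTokenizer(nn.Module):
    """Fuses image and LiDAR features into one token per scene element."""

    def __init__(self, img_dim: int = 768, lidar_dim: int = 128, token_dim: int = 256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, token_dim)
        self.lidar_proj = nn.Linear(lidar_dim, token_dim)
        self.fuse = nn.Sequential(
            nn.Linear(2 * token_dim, token_dim),
            nn.ReLU(),
            nn.Linear(token_dim, token_dim),
        )

    def forward(self, img_feats: torch.Tensor, lidar_feats: torch.Tensor) -> torch.Tensor:
        # img_feats:   [B, N, img_dim]   pooled image-foundation-model features per element
        # lidar_feats: [B, N, lidar_dim] pooled LiDAR-network features per element
        fused = torch.cat([self.img_proj(img_feats), self.lidar_proj(lidar_feats)], dim=-1)
        return self.fuse(fused)  # [B, N, token_dim] scene tokens


# Usage: a few hundred scene tokens can be concatenated with agent-history tokens
# and consumed by most transformer-based motion prediction backbones.
tokenizer = SceneTokenizer()
img_feats = torch.randn(2, 384, 768)    # 2 scenes, 384 scene elements each
lidar_feats = torch.randn(2, 384, 128)
scene_tokens = tokenizer(img_feats, lidar_feats)
print(scene_tokens.shape)  # torch.Size([2, 384, 256])
```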
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Mu_2024_CVPR,
    author    = {Mu, Norman and Ji, Jingwei and Yang, Zhenpei and Harada, Nate and Tang, Haotian and Chen, Kan and Qi, Charles R. and Ge, Runzhou and Goel, Kratarth and Yang, Zoey and Ettinger, Scott and Al-Rfou, Rami and Anguelov, Dragomir and Zhou, Yin},
    title     = {MoST: Multi-Modality Scene Tokenization for Motion Prediction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {14988-14999}
}