Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction

Inhwan Bae, Junoh Lee, Hae-Gon Jeon; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 753-766

Abstract


Language models have demonstrated impressive abilities in context understanding and generation. Inspired by the recent success of language foundation models, in this paper we propose LMTraj (Language-based Multimodal Trajectory predictor), which recasts the trajectory prediction task as a question-answering problem. Departing from traditional numerical regression models, which treat the trajectory coordinate sequence as continuous signals, we consider them as discrete signals, like text prompts. Specifically, we first transform the input space for the trajectory coordinates into the natural language space. Here, the entire time-series trajectories of pedestrians are converted into a text prompt, and scene images are described as text through image captioning. The transformed numerical and image data are then wrapped into a question-answering template for use in a language model. Next, to guide the language model in understanding and reasoning about high-level knowledge, such as scene context and social relationships between pedestrians, we introduce auxiliary multi-task question-and-answering. We then train a numerical tokenizer on the prompt data. We encourage the tokenizer to cleanly separate the integer and decimal parts of each coordinate, and leverage it to capture correlations between consecutive numbers in the language model. Lastly, we train the language model using the numerical tokenizer and all of the question-answer prompts. Here, we propose a beam-search-based most-likely prediction and a temperature-based multimodal prediction to implement both deterministic and stochastic inference. Applying our LMTraj, we show that a language-based model can be a powerful pedestrian trajectory predictor that outperforms existing numerical regression methods. Extensive experiments show that our LMTraj successfully understands social relationships and accurately extrapolates multimodal futures on public pedestrian trajectory prediction benchmarks. Code is publicly available at https://github.com/inhwanbae/LMTrajectory.
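
A minimal sketch of the trajectory-to-text conversion described in the abstract. The prompt template, coordinate precision, and helper name are illustrative assumptions, not the exact format used by LMTraj:

    # Hypothetical helper: serialize an observed (x, y) trajectory and a
    # scene caption into a question-answering style text prompt. The
    # template below is an assumption for illustration only.
    def trajectory_to_prompt(observed, caption, decimals=2):
        coords = ", ".join(
            f"({x:.{decimals}f}, {y:.{decimals}f})" for x, y in observed
        )
        return (
            f"Scene: {caption}\n"
            f"Observed trajectory: {coords}\n"
            "Question: Where will the pedestrian move next?"
        )

    prompt = trajectory_to_prompt(
        observed=[(3.21, 4.50), (3.40, 4.62), (3.58, 4.75)],
        caption="a pedestrian walking along a sidewalk near an entrance",
    )
    print(prompt)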
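The paper's numerical tokenizer is learned from the prompt data; the regex-based splitter below only illustrates the integer/decimal separation it is encouraged to perform, under that simplifying assumption:

    import re

    # Illustrative stand-in for the learned numerical tokenizer: split
    # "3.21" into "3 . 21" so the integer and decimal parts become
    # separate, reusable tokens for the language model.
    NUM = re.compile(r"(\d+)\.(\d+)")

    def split_numbers(text):
        return NUM.sub(r"\1 . \2", text)

    print(split_numbers("(3.21, 4.50)"))  # -> "(3 . 21, 4 . 50)"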
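The two inference modes (deterministic most-likely prediction via beam search, stochastic multimodal prediction via temperature-based sampling) can be sketched with the Hugging Face transformers generation API; the model checkpoint and prompt below are placeholders, not LMTraj's released weights:

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    # Placeholder checkpoint; LMTraj's actual backbone may differ.
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    prompt_text = (
        "Scene: a pedestrian walking along a sidewalk. "
        "Observed trajectory: (3 . 21, 4 . 50), (3 . 40, 4 . 62), (3 . 58, 4 . 75). "
        "Question: Where will the pedestrian move next?"
    )
    inputs = tokenizer(prompt_text, return_tensors="pt")

    # Deterministic, most-likely prediction via beam search.
    best = model.generate(**inputs, num_beams=5, do_sample=False, max_new_tokens=64)

    # Stochastic multimodal prediction: sample several candidate futures
    # from the output distribution using a temperature.
    samples = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=20,
        max_new_tokens=64,
    )
    futures = tokenizer.batch_decode(samples, skip_special_tokens=True)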

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Bae_2024_CVPR,
    author    = {Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
    title     = {Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {753-766}
}