Spatial-Temporal Consistency Network for Low-Latency Trajectory Forecasting

Shijie Li, Yanying Zhou, Jinhui Yi, Juergen Gall; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1940-1949

Abstract


Trajectory forecasting is a crucial step for autonomous vehicles and mobile robots to navigate and interact safely. To handle the spatial interactions between objects, graph-based approaches have been proposed. These methods, however, model motion on a frame-to-frame basis and do not provide a strong temporal model. To overcome this limitation, we propose a compact model called Spatial-Temporal Consistency Network (STC-Net). In STC-Net, dilated temporal convolutions are introduced to model long-range dependencies along each trajectory for better temporal modeling, while graph convolutions are employed to model the spatial interactions among different trajectories. Furthermore, we propose a feature-wise convolution to generate the predicted trajectories in one pass and refine the forecast trajectories together with the reconstructed observed trajectories. We demonstrate that STC-Net generates spatially and temporally consistent trajectories and outperforms other graph-based methods. Since STC-Net requires only 0.7k parameters and forecasts the future with a latency of only 1.3 ms, it advances the state of the art and satisfies the requirements of realistic applications.
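
For readers who want a concrete picture of the architecture outlined above, the following is a minimal PyTorch sketch, not the authors' implementation: dilated temporal convolutions run along the time axis of each trajectory, a graph convolution mixes features across interacting trajectories, and a convolution over the temporal dimension maps the observed steps to the observed plus predicted steps in one pass. The class names (STBlock, STCNetSketch), all layer sizes, and the uniform adjacency are illustrative assumptions and are not parameter-matched to the 0.7k-parameter model reported in the paper.

# Minimal sketch of an STC-Net-style model (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn


class STBlock(nn.Module):
    """Dilated temporal convolution along each trajectory, followed by a
    graph convolution over trajectories at every time step."""

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        # Dilated convolution over the time axis only (kernel (3, 1)),
        # applied independently to every trajectory.
        self.temporal = nn.Conv2d(
            channels, channels, kernel_size=(3, 1),
            padding=(dilation, 0), dilation=(dilation, 1),
        )
        # Point-wise transform used by the graph convolution.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, channels, time, nodes)
        # adj: (nodes, nodes) normalized adjacency encoding spatial interactions
        x = self.relu(self.temporal(x))
        # Graph convolution: mix features of interacting trajectories.
        x = torch.einsum("bctn,nm->bctm", self.spatial(x), adj)
        return self.relu(x)


class STCNetSketch(nn.Module):
    """Stacked spatial-temporal blocks with increasing dilation, followed by a
    convolution over the time axis that maps the observed steps to the
    observed + predicted steps in a single pass."""

    def __init__(self, in_dim=2, channels=8, t_obs=8, t_pred=12, num_blocks=3):
        super().__init__()
        self.embed = nn.Conv2d(in_dim, channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            STBlock(channels, dilation=2 ** i) for i in range(num_blocks)
        )
        # Maps the temporal dimension from t_obs to t_obs + t_pred, so the
        # reconstructed past and the forecast future come out together.
        self.time_map = nn.Conv2d(t_obs, t_obs + t_pred, kernel_size=1)
        self.head = nn.Conv2d(channels, in_dim, kernel_size=1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.embed(x)                      # (B, C, T_obs, N)
        for block in self.blocks:
            h = block(h, adj)
        h = h.permute(0, 2, 1, 3)              # (B, T_obs, C, N)
        h = self.time_map(h)                   # (B, T_obs + T_pred, C, N)
        h = h.permute(0, 2, 1, 3)              # (B, C, T_obs + T_pred, N)
        return self.head(h)                    # (B, in_dim, T_obs + T_pred, N)


if __name__ == "__main__":
    batch, nodes, t_obs, t_pred = 4, 5, 8, 12
    model = STCNetSketch(t_obs=t_obs, t_pred=t_pred)
    trajs = torch.randn(batch, 2, t_obs, nodes)     # observed (x, y) offsets
    adj = torch.full((nodes, nodes), 1.0 / nodes)   # placeholder adjacency
    out = model(trajs, adj)
    print(out.shape)  # torch.Size([4, 2, 20, 5])

Treating the time axis as the channel dimension in time_map is what lets the sketch emit the reconstructed past and the forecast future together in a single pass, mirroring the one-pass generation described in the abstract; the exact layer configuration used in the paper may differ.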

Related Material


[bibtex]
@InProceedings{Li_2021_ICCV,
    author    = {Li, Shijie and Zhou, Yanying and Yi, Jinhui and Gall, Juergen},
    title     = {Spatial-Temporal Consistency Network for Low-Latency Trajectory Forecasting},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1940-1949}
}