Vision-Based Landing Guidance through Tracking and Orientation Estimation
Abstract
Fixed-wing aerial vehicles are equipped with functionalities such as ILS (instrument landing system), PAR (precision approach radar), and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since their navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for landing guidance is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot's cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. Firstly, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches through the generator code available in the LARD (landing approach runway detection dataset) repository. Secondly, in contrast to former studies that focus on object detection for finding the runway, we chose the state-of-the-art tracking model LoRAT to follow the runway within a bounding box in each video frame. Thirdly, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera's relative pose via the Perspective-n-Point (PnP) algorithm. Our experimental results on a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework's highly accurate tracking and alignment capabilities. Our source code and the LoRAT model pre-trained on LARD videos are available at https://github.com/jpklock2/vision-based-landing-guidance.
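To illustrate the pose-estimation step described above, the sketch below recovers a camera pose from four runway-corner correspondences with OpenCV's solvePnP. It is a minimal sketch, not the paper's implementation: the runway dimensions, keypoint coordinates, and camera intrinsics are hypothetical placeholders.

# Minimal PnP sketch: given the four runway corners extracted from a tracked
# frame and the runway's real-world dimensions, recover the camera's relative
# pose with the Perspective-n-Point algorithm (OpenCV). All numeric values
# below are hypothetical placeholders, not values from the paper.
import cv2
import numpy as np

# Runway corners in a world frame anchored at the threshold (meters):
# a hypothetical 45 m wide, 3000 m long runway lying in the z = 0 plane.
object_points = np.array([
    [-22.5,    0.0, 0.0],   # near-left corner
    [ 22.5,    0.0, 0.0],   # near-right corner
    [ 22.5, 3000.0, 0.0],   # far-right corner
    [-22.5, 3000.0, 0.0],   # far-left corner
], dtype=np.float64)

# The same corners as keypoints extracted from the tracked bounding box
# in one video frame (pixels); values are illustrative only.
image_points = np.array([
    [410.0, 620.0],
    [870.0, 618.0],
    [655.0, 310.0],
    [625.0, 312.0],
], dtype=np.float64)

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) for a 1280x720 camera.
camera_matrix = np.array([
    [1000.0,    0.0, 640.0],
    [   0.0, 1000.0, 360.0],
    [   0.0,    0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(4)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
assert ok, "PnP failed to converge"

# Convert the rotation vector to a matrix; the camera position in the
# world frame is C = -R^T t, from which alignment angles can be derived.
R, _ = cv2.Rodrigues(rvec)
print("camera position (world frame):", (-R.T @ tvec).ravel())

Note that PnP requires calibrated intrinsics: the recovered pose is only as accurate as the camera matrix and the keypoint localization feeding it.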
Related Material
[pdf]
[bibtex]
@InProceedings{Ferreira_2025_WACV,
  author    = {Ferreira, Jo\~ao P. K. and Pinto, Jo\~ao P. and Moura, J\'ulia and Li, Yi and Castro, Cristiano L. and Angelov, Plamen},
  title     = {Vision-Based Landing Guidance through Tracking and Orientation Estimation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {9663-9671}
}