DeepLanes: End-To-End Lane Position Estimation Using Deep Neural Networks

Alexandru Gurghian, Tejaswi Koduri, Smita V. Bailur, Kyle J. Carey, Vidya N. Murali; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016, pp. 38-45

Abstract


Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fully-autonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired in tedious pre-processing and computational effort. We present an approach that estimates lane positions directly, using a deep neural network operating on images from laterally-mounted, down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. In addition to distinguishing whether a lane marker is present, the network estimates the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre- or post-processing. This system can be used not only to estimate lane position for navigation, but also to provide an efficient way to validate the robustness of driver-assist features that depend on lane information.
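
The abstract frames lane position estimation end to end: a network looks at a single frame from a side-mounted, down-facing camera and directly outputs either "no marker present" or the marker's position in the image, with no separate pre- or post-processing stage. The sketch below illustrates one plausible way to set up that formulation, treating position estimation as classification over discrete image rows plus one extra "absent" class. The layer sizes, the assumed 240x360 input resolution, and the number of position classes are illustrative assumptions, not the architecture published in the paper.

# Illustrative sketch only: a small CNN that classifies a down-facing camera
# frame into one of NUM_POSITIONS candidate lane-marker rows, or an extra
# class meaning "no lane marker present". All sizes are assumptions.
import torch
import torch.nn as nn

NUM_POSITIONS = 316              # assumed number of candidate marker rows
NUM_CLASSES = NUM_POSITIONS + 1  # +1 class for "no lane marker present"

class LanePositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),  # logits over positions + "absent"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    net = LanePositionNet()
    # One synthetic frame standing in for a laterally-mounted, down-facing
    # camera image (assumed 240x360 RGB).
    frame = torch.randn(1, 3, 240, 360)
    logits = net(frame)                  # shape: (1, NUM_CLASSES)
    pred = logits.argmax(dim=1).item()
    if pred == NUM_POSITIONS:
        print("no lane marker detected")
    else:
        print(f"lane marker at row index {pred}")

Casting the problem as classification rather than regression lets a single softmax output express both whether a marker is present and where it is, which matches the abstract's claim that one network handles detection and position estimation together.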

Related Material


[bibtex]
@InProceedings{Gurghian_2016_CVPR_Workshops,
author = {Gurghian, Alexandru and Koduri, Tejaswi and Bailur, Smita V. and Carey, Kyle J. and Murali, Vidya N.},
title = {DeepLanes: End-To-End Lane Position Estimation Using Deep Neural Networks},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2016}
}