- [pdf] [code]
DILane: Dynamic Instance-Aware Network for Lane Detection
Lane detection is a challenging computer-vision task and a critical technology for autonomous driving. It requires predicting the topology of lane lines in complex scenes while distinguishing between different lane types and instances. Most existing studies rely on only a single-level feature map extracted by a deep neural network. However, both high-level and low-level features are important for lane detection: lanes are easily affected by illumination and occlusion, so texture information is unavailable when visual evidence is missing, whereas when lanes are clearly visible, their curved and slender texture plays the larger role in improving detection accuracy. In this study, the proposed DILane exploits both high-level and low-level features for accurate lane detection. First, in contrast to mainstream detection methods that use predefined fixed-position anchors, we define learnable anchors that capture the statistical distribution of potential lane locations. Second, we propose a dynamic head that leverages low-level texture information to conditionally enhance the high-level semantic features of each proposed instance. Finally, we present a self-attention module that gathers global information in parallel, which remarkably improves detection accuracy. Experimental results on two mainstream public benchmarks demonstrate that our method outperforms previous work, with F1 scores of 79.43% on CULane and 97.80% on TuSimple, while running at 148+ FPS.
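The dynamic-head idea described in the abstract — low-level texture features conditionally modulating each instance's high-level semantic features — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the channel-gating formulation, the feature shapes, and the randomly initialized generator matrix `W_gen` (which would be a learned layer in practice) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

C_LOW, C_HIGH, N = 16, 32, 4  # texture channels, semantic channels, lane proposals

# Stand-in for a learned kernel generator (random here, trained in practice).
W_gen = rng.normal(scale=0.1, size=(C_HIGH, C_LOW))

def dynamic_head(high_feats, low_feats):
    """Conditionally enhance per-instance high-level features.

    high_feats: (N, C_HIGH) semantic features, one row per lane proposal
    low_feats:  (N, C_LOW)  texture features for the same proposals
    Returns:    (N, C_HIGH) modulated features
    """
    # Each instance's texture features generate its own channel gate in (0, 1).
    gates = 1.0 / (1.0 + np.exp(-(low_feats @ W_gen.T)))  # (N, C_HIGH)
    # Instance-wise modulation: texture decides which semantic channels pass.
    return high_feats * gates

high = rng.normal(size=(N, C_HIGH))
low = rng.normal(size=(N, C_LOW))
out = dynamic_head(high, low)
print(out.shape)  # (4, 32)
```

Because the gate is a sigmoid, each instance's modulation is a soft per-channel selection rather than an additive fusion, which matches the "conditionally enhance" phrasing; the actual paper may generate full convolution kernels rather than channel gates.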