@InProceedings{Zhu_2025_ICCV,
  author    = {Zhu, Huachao and Liu, Zelong and Sun, Zhichao and Zou, Yuda and Xia, Gui-Song and Xu, Yongchao},
  title     = {Beyond Pixel Uncertainty: Bounding the OoD Objects in Road Scenes},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {8472-8481}
}
Beyond Pixel Uncertainty: Bounding the OoD Objects in Road Scenes
Abstract
Recognizing out-of-distribution (OoD) objects on roads is crucial for safe driving. Most existing methods rely on segmentation models' uncertainty as anomaly scores, which often produces false positives, especially at ambiguous regions such as boundaries, where segmentation models inherently exhibit high uncertainty. Additionally, it is challenging to define a suitable threshold to generate anomaly masks, especially given the inconsistencies in predictions across consecutive frames. We propose DetSeg, a novel paradigm that incorporates object-level understanding. DetSeg first detects all objects in the open world and then suppresses in-distribution (ID) bounding boxes, leaving only OoD proposals. These proposals can either help previous methods eliminate false positives (DetSeg-R), or generate binary anomaly masks without a complex threshold search when combined with a box-prompted segmentation module (DetSeg-S). Additionally, we introduce vanishing point guided Hungarian matching (VPHM) to smooth the prediction results within a video clip, mitigating abrupt variations of predictions between consecutive frames. Comprehensive experiments on various benchmarks demonstrate that DetSeg significantly improves performance, reducing the FPR95 of previous methods by up to 37.45%, offering a more robust and practical solution. Code: https://github.com/huachao0124/DetSeg-official.
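The detect-then-suppress step of the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names, data structures, and score threshold are all assumptions chosen for clarity; in the paper, detections would come from an open-world detector and ID classes from the training taxonomy (e.g. Cityscapes).

```python
# Hypothetical sketch of the DetSeg idea: an open-world detector proposes
# boxes, boxes recognized as in-distribution (ID) classes are suppressed,
# and the remainder serve as OoD proposals.

# Illustrative subset of ID classes (Cityscapes-style road-scene labels).
ID_CLASSES = {"road", "car", "person", "building", "sky"}

def suppress_id_boxes(detections, id_classes=ID_CLASSES, score_thresh=0.5):
    """Keep only confident boxes whose predicted label is not an ID class."""
    return [
        d for d in detections
        if d["label"] not in id_classes and d["score"] >= score_thresh
    ]

detections = [
    {"box": (10, 20, 50, 60),   "label": "car",     "score": 0.9},  # ID -> dropped
    {"box": (100, 40, 140, 90), "label": "unknown", "score": 0.8},  # OoD proposal
    {"box": (5, 5, 15, 15),     "label": "unknown", "score": 0.3},  # low score -> dropped
]
ood_proposals = suppress_id_boxes(detections)
```

In DetSeg-R these surviving proposals would gate a pixel-level anomaly map to remove false positives outside any OoD box; in DetSeg-S they would instead prompt a box-conditioned segmentation module to directly produce binary masks.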