PanoPoint: Self-Supervised Feature Point Detection and Description for 360° Panoramas

Hengzhi Zhang, Hong Yi, Haijing Jia, Wei Wang, Makoto Odamaki; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 6449-6458

Abstract


We introduce PanoPoint, a joint feature point detection and description framework that addresses the nonlinear distortions and multi-view geometry problems between 360° panoramas. Our fully convolutional model operates directly on panoramas and computes pixel-level feature point locations and associated descriptors in a single forward pass, rather than requiring image preprocessing (e.g., panorama-to-cubemap conversion) followed by feature detection and description. To train the PanoPoint model, we propose PanoMotion, which simulates transformations between different viewpoints and generates warped panoramas. Moreover, we propose PanoMotion Adaptation, a multi-viewpoint adaptive annotation approach that boosts feature point detection repeatability without manual labelling. Trained on the synthetic dataset annotated by this method, PanoPoint outperforms traditional and other learned approaches and achieves state-of-the-art results in repeatability, localization accuracy, point correspondence precision and runtime, especially for panoramas with significant viewpoint and illumination changes.
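The abstract describes PanoMotion only at a high level. As a rough illustration of the kind of viewpoint warping it implies, the sketch below rotates an equirectangular panorama on the unit sphere with NumPy: each output pixel is mapped to a ray, the ray is inverse-rotated, and the source pixel is sampled with nearest-neighbour lookup. The function names, the yaw/pitch parameterization, and the restriction to pure camera rotation are our assumptions for illustration, not the paper's actual PanoMotion implementation.

```python
import numpy as np

def rotation_yaw_pitch(yaw, pitch):
    """Rotation matrix: pitch about the y-axis, then yaw about the z-axis.
    (Illustrative parameterization; not from the paper.)"""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    return Rz @ Ry

def warp_equirect(img, R):
    """Nearest-neighbour warp of an equirectangular panorama by rotation R."""
    H, W = img.shape[:2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # pixel centers -> longitude in [-pi, pi), latitude in (-pi/2, pi/2)
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi
    # unit viewing ray for each output pixel
    d = np.stack([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)], axis=-1)
    # inverse-rotate the rays: d @ R applies R.T to each row vector
    d_src = d @ R
    lon_s = np.arctan2(d_src[..., 1], d_src[..., 0])
    lat_s = np.arcsin(np.clip(d_src[..., 2], -1.0, 1.0))
    # back to source pixel indices (wrap longitude, clamp latitude)
    u_s = np.round((lon_s + np.pi) / (2.0 * np.pi) * W - 0.5).astype(int) % W
    v_s = np.clip(np.round((np.pi / 2.0 - lat_s) / np.pi * H - 0.5).astype(int),
                  0, H - 1)
    return img[v_s, u_s]
```

A pure yaw rotation by a multiple of the per-pixel angle 2π/W reduces to a horizontal roll of the image columns, which makes the warp easy to sanity-check; general rotations additionally bend horizontal structures, which is the panorama-specific distortion PanoPoint is trained to be robust to.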

Related Material


[bibtex]
@InProceedings{Zhang_2023_CVPR,
    author    = {Zhang, Hengzhi and Yi, Hong and Jia, Haijing and Wang, Wei and Odamaki, Makoto},
    title     = {PanoPoint: Self-Supervised Feature Points Detection and Description for 360deg Panorama},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {6449-6458}
}